Delegation issues


Hello Folks,

We were using AWX 15.x for quite some time and used `delegate_to: 127.0.0.1` (or `localhost`) quite effectively to run tasks on the controller node.

After recently upgrading to AWX 23.x, we found issues with delegation.

```yaml
delegate_to: 127.0.0.1
```

If I understand correctly, the above now runs on the execution environment pod. This fails because our firewall policies are tied to the controller host, and the pod cannot connect to other hosts. This delegation is needed for only a handful of steps; all the others work fine.

I was able to get around this by hard-coding the name of the AWX master, but this doesn't look flexible: the playbooks become tied to this instance and we lose portability.

```yaml
delegate_to: "AWX master"
```
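One way to keep the hard-coded name out of the playbook is to move it into a variable; a minimal sketch, assuming you define a `controller_host` variable per environment (in inventory or as an extra var) — the variable name and the command are hypothetical:

```yaml
# Hypothetical: "controller_host" would be set per environment in
# inventory or via extra vars, instead of hard-coding "AWX master".
# Falls back to localhost where no override is defined.
- name: Step that must run on the controller
  ansible.builtin.command: /usr/local/bin/some_step
  delegate_to: "{{ controller_host | default('localhost') }}"
```

This keeps the playbook itself portable; only the inventory carries the instance-specific name.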

Kindly share your thoughts on the best strategy to delegate to the AWX master.

Hi,
As you mentioned, I think the best option is a firewall rule that allows traffic from the pod subnet to the target subnet/hosts you need to work on.

That’s how I manage it in my situation.

Ouch, we tried that approach for another use case, but the network team shot it down: source subnets won't be allowed, it has to be a precise IP.

Just wondering, how do I figure out which subnet AWX will use to spawn pods?

I'm not a Kubernetes expert at all, but worker nodes take their IPs from DHCP. So apart from the Ingress, which is static, you can't have a precise IP, unfortunately. Moreover, every time you launch a job, AWX creates a temporary pod that takes an IP from a DHCP subnet.

IMHO you have to specify a subnet to make it work.

Here's an analogy: it's like having a car where you have to press the brake pedal to brake, but the network team tells you to use something else because they don't know what braking is…

Unless I'm misunderstanding something here: yes, pods inside a kubernetes cluster get launched with an internal IP from an internal CIDR (often something in the 172 range; certain distributions vary, but you get the point). However, when a pod reaches out to things outside the kubernetes cluster, such as Ansible targets, the real network your network folks care about will see traffic originating from the kubernetes node's IP, since pod traffic is typically source-NATed to the node on egress. Your network team needs to allow the kubernetes hosts' IP addresses, not the cluster's internal pod IP addresses.
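One way to check which source address a managed target actually sees from a job pod is to look at what its sshd recorded for the session; a sketch, relying on the `SSH_CONNECTION` environment variable that OpenSSH usually sets (it may be absent in some configurations):

```yaml
# Sketch: run against a managed target to see which source IP its
# sshd observed for this connection. The first field of
# $SSH_CONNECTION is the client address, so this helps confirm
# whether traffic arrives from the kubernetes node IP or a pod IP.
- name: Show the source address the target sees
  ansible.builtin.shell: echo "$SSH_CONNECTION"
  register: conn

- name: Print it
  ansible.builtin.debug:
    var: conn.stdout
```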

To the OP's question: I have no idea why your delegation to localhost would be failing. I can't imagine your network team would somehow be able to insert a firewall inside the kubernetes cluster itself; it would also need to block the pod from talking to itself. That'd be quite a feat.


@mcen when you say "Your network team needs to allow the kubernetes hosts' IP addresses, not the cluster's internal pod IP addresses":

Do we agree that the host IP addresses are most of the time delivered by DHCP? (I mean, every time we provision a k8s cluster we have to specify the CIDR subnet used for nodes.)


Ohhh, I see. The kubernetes hosts are getting DHCP assignments. That was the missing piece for me. My apologies. Though I don't know if the OP's org is provisioned that way.

If they are, it seems odd for a network team to permit DHCP assignments but then disallow CIDRs in firewall rules. But network people can be odd.


That's more or less what I said three comments earlier :grin:


If you delegate to `localhost` literally, Ansible uses the `local` connection plugin, which is defined by default for that implicit host: no network connection of any kind. If you use `localhost` instead of `127.0.0.1`, does it work?
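A minimal sketch of the difference (the task itself is a placeholder): the implicit `localhost` host runs with the `local` connection plugin, so the command executes in the job pod itself with no network hop, whereas `127.0.0.1` can resolve to a separate host entry that Ansible may try to reach over SSH unless `ansible_connection: local` is set for it.

```yaml
# Runs in the process executing the play (the EE pod under AWX),
# via the "local" connection plugin -- no SSH, no firewall in play.
- name: Run a step locally
  ansible.builtin.command: hostname
  delegate_to: localhost

# By contrast, 127.0.0.1 is a distinct host entry; unless it has
# ansible_connection: local, Ansible may attempt an SSH connection.
```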