Control nodes in AWX job execution

I have AWX deployed with awx-operator in 3-4 different regional Kubernetes clusters, all sharing a single backend database.
The issue is that, by default, the awx-task pods from all regions are added to the control plane instance group, and they cannot be removed per environment.
So when a job is triggered from the US, there is a chance it will fail if it is assigned to a control node outside the US, because an EU awx-task pod has no connectivity to the US servers the job ultimately needs to run against.
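For reference, each regional deployment looks roughly like this (a sketch; the resource and secret names are placeholders, and `postgres_configuration_secret` is the awx-operator field for pointing at an external database):

```yaml
# Sketch of one regional AWX custom resource; the other clusters use the
# same kind of spec and point at the same shared Postgres instance.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-us            # awx-eu, awx-apac, ... in the other clusters
  namespace: awx
spec:
  # Secret with host/port/database/username/password of the shared backend DB
  postgres_configuration_secret: awx-shared-postgres-config
```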

Is there a way to resolve this?

As far as I understand, AWX is not designed for a topology in which some control nodes have no connectivity to one another.

According to the design pattern of the automation mesh, all nodes in the mesh are connected, and none are isolated.

More fundamentally, sharing a single database across multiple disconnected AWX instances is not a typical use case. That configuration may not be well tested and could lead to unexpected side effects.

I recommend deploying the control nodes in a single region and connecting the regions using hop nodes or execution nodes. Depending on your security requirements there may be restrictions on the direction of communication between regions, but with the correct configuration the connections between control nodes and hop or execution nodes can be established in either direction, inbound or outbound.
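As a rough sketch of what that looks like at the Receptor level (node IDs, hostnames, and ports below are placeholders; in practice the install bundle AWX generates when you register an instance produces these files for you):

```yaml
# receptor.conf on an EU hop node: accepts peer connections from the
# control plane and from EU execution nodes
---
- node:
    id: hop-eu

- log-level: info

- tcp-listener:
    port: 27199
```

```yaml
# receptor.conf on an EU execution node: dials out to the hop node,
# so only outbound connectivity from this node is required
---
- node:
    id: exec-eu-1

- tcp-peer:
    address: hop-eu.example.com:27199
    redial: true

# Allows the node to execute job payloads dispatched over the mesh
- work-command:
    worktype: ansible-runner
    command: ansible-runner
    params: worker
    allowruntimeparams: true
```

The direction of each `tcp-listener`/`tcp-peer` pairing is up to you, so you can have each region dial out or listen according to your firewall rules.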


Ok, understood!
I do have connectivity between the GKE clusters, i.e. pods or services in one cluster can communicate with those in another.
But how do I configure the hop nodes and execution nodes so that they register, listen, and communicate correctly, and so that a triggered job is assigned to the correct execution node based on the selected instance group?
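For example, what I am picturing is something like this against the AWX REST API (the host, token, and numeric IDs are placeholders I made up):

```bash
# 1) Register an EU execution node (listener_port must match its receptor config)
curl -s -X POST "https://awx-us.example.com/api/v2/instances/" \
  -H "Authorization: Bearer $AWX_TOKEN" -H "Content-Type: application/json" \
  -d '{"hostname": "exec-eu-1.example.com", "node_type": "execution", "listener_port": 27199}'

# 2) Create a region-scoped instance group and put the node in it
curl -s -X POST "https://awx-us.example.com/api/v2/instance_groups/" \
  -H "Authorization: Bearer $AWX_TOKEN" -H "Content-Type: application/json" \
  -d '{"name": "eu"}'
curl -s -X POST "https://awx-us.example.com/api/v2/instance_groups/42/instances/" \
  -H "Authorization: Bearer $AWX_TOKEN" -H "Content-Type: application/json" \
  -d '{"id": 7}'   # id of the instance created in step 1

# 3) Pin the EU job template to that instance group so its jobs land on EU nodes
curl -s -X POST "https://awx-us.example.com/api/v2/job_templates/10/instance_groups/" \
  -H "Authorization: Bearer $AWX_TOKEN" -H "Content-Type: application/json" \
  -d '{"id": 42}'   # id of the "eu" instance group
```

Is that roughly the right direction?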