We’ve deployed AWX version 12 on a Kubernetes cluster, with multiple instances of the application running across multiple worker nodes.
AD LDAP authentication is configured in AWX GUI.
What happens now is that whenever I log in, the authentication is handled by a random node/instance. I was wondering whether there is any way to restrict the authentication process to specific nodes/instances.
The reason I’m looking for this is that my K8s cluster consists of nodes from different domains, and we have different LDAP servers/users under each domain. So if I log in with user A from domain A and the authentication happens on instance B in domain B, it fails.
No, the idea is that all application containers are running identically
and are functionally interchangeable.
I don't know the constraints that led to setting up K8s that way, but
I'd strongly suggest not doing it. If you must, restrict placement so
that the containers only end up on nodes in one domain.
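If you do keep one set of AWX pods per domain, placement can be restricted with a `nodeSelector` (or node affinity) in the pod spec. A minimal sketch, assuming you add a hypothetical custom `domain` label to the worker nodes (the deployment name, labels, and image tag below are illustrative, not from the thread):

```yaml
# Sketch only: assumes worker nodes carry a custom label, e.g.
#   kubectl label node <node-name> domain=domainA
# Pods from this Deployment will then only be scheduled onto domain-A nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awx-domain-a          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: awx-domain-a
  template:
    metadata:
      labels:
        app: awx-domain-a
    spec:
      nodeSelector:
        domain: domainA       # only nodes labeled domain=domainA
      containers:
        - name: awx-web
          image: ansible/awx:12.0.0
```

With that, each domain's pods only see the LDAP servers reachable in their own domain, at the cost of running one Deployment per domain.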
The rationale behind setting up K8s this way was to take advantage of the instance groups functionality and avoid having multiple AWX deployments.
We assign instances on domain A to instance group A and instances on domain B to instance group B; instance group A and instance group B manage servers in domain A and domain B respectively.
In our environment AWX is deployed in K8s, and we’ve assigned instances to multiple instance groups based on the nodes they run on. But each time a pod/instance gets deleted, it comes back up with a different instance hostname.
Is there a way I can ensure that instances coming up on a specific node get assigned to a specific instance group automatically? For example, an instance coming up on node A gets assigned to instance group A.
I’m not sure whether anyone has had a similar requirement, but if anyone has any idea how to achieve this, or can confirm whether it’s possible, it would be of great help.
As far as I know, there are three policies for instance groups:
Minimum number of instances
Minimum percentage of all instances
Instance grouping by name
But the problem in my case is that every time the pod is restarted, it gets a new hostname.
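Since the name-based policy breaks when hostnames change, one possible workaround is a small reconciliation script run periodically (e.g. as a Kubernetes CronJob) that lists the registered instances via the AWX REST API and associates each one with a group based on a naming convention. This is only a sketch: the `/api/v2/instances/` and `/api/v2/instance_groups/{id}/instances/` endpoints are part of the AWX API, but the `NODE_TO_GROUP` mapping, the `AWX_URL`/`AWX_TOKEN` environment variables, and the assumption that the instance hostname embeds the node name are all illustrative and would need adapting:

```python
"""Sketch: auto-assign AWX instances to instance groups by hostname.

Assumptions (not from the thread): a token in AWX_TOKEN, the API base
URL in AWX_URL, and a hostname convention where the pod name embeds
the node, e.g. "awx-nodeA-5f7c9". Adapt to your naming scheme.
"""
import json
import os
import urllib.request
from typing import Optional

# Hypothetical mapping from a substring of the instance hostname
# to the target instance group id in AWX.
NODE_TO_GROUP = {"nodeA": 2, "nodeB": 3}

def group_for_instance(hostname: str) -> Optional[int]:
    """Return the instance group id for an instance hostname, or None."""
    for marker, group_id in NODE_TO_GROUP.items():
        if marker in hostname:
            return group_id
    return None

def api(method: str, path: str, payload: Optional[dict] = None) -> dict:
    """Minimal AWX REST helper using a bearer token from the environment."""
    req = urllib.request.Request(
        os.environ["AWX_URL"].rstrip("/") + path,
        data=json.dumps(payload).encode() if payload else None,
        headers={
            "Authorization": "Bearer " + os.environ["AWX_TOKEN"],
            "Content-Type": "application/json",
        },
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or "{}")

def reconcile() -> None:
    """Associate every registered instance with its group by hostname."""
    for inst in api("GET", "/api/v2/instances/")["results"]:
        group_id = group_for_instance(inst["hostname"])
        if group_id is not None:
            api("POST",
                "/api/v2/instance_groups/%d/instances/" % group_id,
                {"id": inst["id"]})

if __name__ == "__main__":
    reconcile()
```

Run on a short interval, this would re-attach a restarted pod's new hostname to the right group without manual intervention, as long as the new hostname still identifies the node.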