How do you update your AWX configs? What's your update strategy? #rollingUpdate

Hello there,

In short:
We noticed that not all of our config changes reach our AWX pods via the RollingUpdate strategy, while manually deleting our AWX Deployment resource and letting the Operator recreate it does get the complete current config onto our AWX pods.

We are running AWX via the AWX Operator (latest version) on a Kubernetes (OpenShift) cluster.

Our deployment logic boils down to:

```bash
oc kustomize kustomize/overlays/$ENV_NAME | oc apply -f -
```
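
For context, our overlay layout is roughly along these lines; this is a minimal sketch, and the paths, namespace, and patch file name are illustrative rather than our exact setup:

```yaml
# kustomize/overlays/$ENV_NAME/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx              # assumed namespace
resources:
  - ../../base              # base contains the AWX custom resource
patches:
  - path: awx-cr-patch.yaml # per-environment overrides for the AWX CR
    target:
      group: awx.ansible.com
      version: v1beta1
      kind: AWX
```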

Since we have the RollingUpdate strategy configured (the default, I think), our Deployment resource in OpenShift creates a new ReplicaSet every time we `oc apply` changes and begins replacing pods.
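
For reference, the strategy stanza on the generated Deployment matches the Kubernetes defaults (the percentages shown below are the standard default values, not something we tuned):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # default: up to 25% extra pods during the rollout
    maxUnavailable: 25%  # default: up to 25% of pods may be down at once
```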

Our problem starts here: new pods are created as replacements for the previous ones, but those new pods do not necessarily hold all of the new configuration.

We noticed that changes to settings such as the following do not propagate to the new pods spawned via the ReplicaSet triggered by our `oc apply -f -`:

```yaml
redis_resource_requirements:
  limits:
    cpu: 104m
```
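
For completeness, this setting lives in the spec of the AWX custom resource. A minimal sketch of the CR (the metadata name and the memory value here are illustrative additions, not from our actual config):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx                  # illustrative name
spec:
  redis_resource_requirements:
    limits:
      cpu: 104m              # the change we tried to roll out
      memory: 128Mi          # illustrative
```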

Once we delete our AWX Deployment resource in OpenShift, it gets recreated with all previously applied config settings in place. The updated config just does not pass through via the RollingUpdate strategy for us.

So we are wondering: what might we be doing wrong? :slight_smile:

So basically, what we are looking for is a trigger that updates the AWX Deployment resource every time we update the AWX custom resource via GitOps (`oc apply -f -`).

Our goal is to achieve the same effect as deleting the Deployment resource (and letting it be recreated with the latest configuration), but via the RollingUpdate strategy, so with no downtime.
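
To illustrate the kind of trigger we mean, a sketch (the Deployment name and namespace here are assumptions, not our actual values): applying the custom resource and then forcing a rolling restart. As far as we understand, this only re-creates pods from the current pod template, so it would help for settings the pods read at startup, but not for fields the Operator has to write into the Deployment spec itself, such as resource limits.

```bash
# apply the updated AWX custom resource, then force a rolling restart
oc kustomize kustomize/overlays/$ENV_NAME | oc apply -f -
oc -n awx rollout restart deployment/awx   # names are assumptions
```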

Any hints in that direction would be very much appreciated! :slight_smile:

What version are you on? This was fixed in a recent AWX Operator release, 1.3.0, as part of https://github.com/ansible/awx-operator/pull/1222.
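
If you are not sure which operator version is deployed, one way to check is the image tag on the operator Deployment (a sketch assuming the default names from the operator's kustomize install and an awx namespace):

```bash
# print the container image(s) of the operator deployment; the tag is the version
oc -n awx get deployment awx-operator-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```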

Thanks,
AWX Team

I recently came across issue 1239 as well as its continuation, issue 1275, where I described the problem in more detail.

Since issue 1275 covers the same problem, I guess we should continue the discussion there.
Thank you! :slight_smile: