Running two versions of AWX in the same K8s cluster?

We have AWX 15.0.1 running, installed via AWX Operator 0.6.0.

I’d like to spin up an AWX 19.2.2 instance to test before moving over. Is it possible to do this on the same cluster? Does the 0.12.0 operator understand the old AWX CRD as well as the new one? Or can you run two operators that each deal with only their own version? (I think not, since the API version is the same in both AWX CRDs.)

Does anyone else do this? I just want to check out EEs (and have others look too, so minikube doesn’t really help) before plunging into 19.2.2.

Yes, I do that. Just create another namespace and edit your yaml file for the Operator.

Note that I am using AWX Operator 0.12.0. I have several different versions of AWX in my GKE cluster, using a different namespace for each one.

Tin

Hello Tin,

Would you please share the changes needed to limit the operator install to a custom namespace, including the operator, CRDs, and AWX instance?
Based on what I read, it appears that we can use a custom namespace for creating the instance, but I’m not sure what needs to be done for the operator install itself.

Thanks

The kustomization.yaml does it.

https://github.com/kurokobo/awx-on-k3s/blob/main/base/kustomization.yaml

See? Look at the namespace field and note how it pulls in the operator YAML.
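The gist is something like the sketch below — the namespace name and resource file names here are illustrative, not the exact contents of that repo’s file:

```yaml
# Illustrative kustomization.yaml: render the operator manifest and the
# AWX custom resource into a single namespace. Names are assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx            # everything listed below lands in this namespace
resources:
  - awx-operator.yaml     # the operator deploy manifest
  - awx.yaml              # your AWX custom resource
```

Apply it with kubectl apply -k . and kustomize stamps that namespace onto every resource it renders.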

Thanks again. If my understanding is correct, the operator is still a separate install; it’s the AWX instance and other resources that are namespace-bound. Would it be possible to limit the operator to a namespace as well?

Have a look at the kustomization.yaml and look at the namespace.


Sorry for not making it clear. I see that AWX instances can be created in different namespaces; I was referring to limiting the operator install itself to a single namespace (not cluster-scoped). I see that https://github.com/ansible/awx-operator/blob/0.12.0/deploy/awx-operator.yaml has a few hard-coded ‘namespace: default’ entries. I will change these in a local copy and see whether that works, since all the other CRDs are supposed to work with ‘scope: Namespaced’.
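That edit can be scripted rather than done by hand. A sketch, using a stand-in file instead of the real manifest and an assumed namespace name of awx-test:

```shell
# Illustrative: the operator manifest hard-codes "namespace: default";
# rewrite every occurrence to a custom namespace before applying it.
# snippet.yaml stands in for awx-operator.yaml; "awx-test" is an assumption.
printf 'namespace: default\n' > snippet.yaml
sed -i 's/namespace: default/namespace: awx-test/g' snippet.yaml
cat snippet.yaml
```

After rewriting the real manifest the same way, kubectl apply -f it as usual.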

Thanks

That’s actually a kubectl thing then, not specific to AWX. Look at this example:

kubectl apply -f pod.yaml --namespace=test

But does that limit the operator to only watching custom resources in its own namespace? That’s not how they normally work. With nginx ingress, for example, you have to define special classes to filter which resources it is interested in.

It'd be really useful to be able to test the next Operator+awx version in a different namespace of the same cluster...

I used the same operator, version 0.12.0, for the different versions of AWX in the different namespaces.

AFAIK, operator 0.12.0 works fine for any version of AWX from 19.0 up. I built my own AWX image and pushed it to our own image repo for security reasons.

Basically, you specify the namespace in the ‘metadata’ section of your YAML deploy file.

Here is a snippet of mine:
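Something along these lines, where the instance name and namespace are illustrative placeholders rather than my actual values:

```yaml
# Illustrative AWX custom resource: the operator in this namespace
# picks it up and deploys an AWX instance there.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-test
  namespace: awx-test     # the namespace this instance lives in
spec:
  service_type: nodeport
```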

I have installed the operator and AWX instance into the namespace. I had to update the serviceaccount and clusterrolebinding YAML entries to change from ‘default’ to the custom namespace. I now see the operator CR, CRDs, pods, and other resources in the same namespace as the instance where it’s configured to run. I believe this will help keep each AWX setup separate from the others.
BTW, we also have a need to customize the quay.io/ansible/awx:19.2.2 image to add our own tools and push it to an internal registry. Where can we see the Dockerfile for these images to get an idea of the included tools and utilities?

Thanks

The instructions are here: https://github.com/ansible/awx/blob/devel/docs/build_awx_image.md

Tin

What tools do you need to add? AWX 19 uses execution environments now, which run in containers.

I can’t speak for Cnu, but I built my own AWX images for security purposes. I also build custom EE images for security, and to include additional + custom roles.

Tin

What security things did you put in it? Curious to know

Have you heard of the SolarWinds supply chain attack? The reason I built my own images is so that all the components are pulled from our internal repo. They have already been scanned by our tools for security vulnerabilities. By building with our own tool chain, using libraries and components from our repo, we can be more confident that they are safe to use.

Tin

Good call. Something to consider

Thank you, Tin, for the repo information and your approach of building custom images to reduce what’s getting into the network. We’re using Artifactory as a container registry and for storing all other binary artifacts as well. We’re trying to restrict external images but aren’t there yet.
I will try to find the Docker build files for the awx_ee container image as well and give that a try.

Thanks

You can write your own.
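With ansible-builder, for example, you describe the image in an execution-environment.yml. A minimal sketch — the base image and dependency file names here are assumptions, not a prescribed layout:

```yaml
# Illustrative ansible-builder definition for a custom EE image.
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/awx-ee:latest'   # swap in your internal mirror
dependencies:
  galaxy: requirements.yml    # collections to bake in
  python: requirements.txt    # extra Python packages
  system: bindep.txt          # OS packages / tools
```

Then build and push it, e.g. ansible-builder build --tag your-registry/custom-ee:1.0, and point your job templates at that image.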