Hi
I’d like to deploy AWX on a 3-node cluster. I have 3 VMs on my on-prem hypervisor with Ubuntu 22.04. They all mount an NFS export (/data) from another NFS VM (also Ubuntu). So my idea is to let AWX/K8s access the storage via hostPath.
I was able to install the cluster with microk8s (including DNS, MetalLB and Ingress as add-ons), but then I struggled to deploy AWX on it, since microk8s seems to have some specialities.
All AWX tutorials I found by googling around were related to single-node installations, which is not what I’d like to build.
I’m open to using something other than microk8s, but the VM OS should remain Ubuntu.
Does anybody have an end-to-end tutorial (incl. AWX and the K8s cluster) or at least some valuable hints (which K8s distro fits best, is easiest, etc.) on how to build an infrastructure as described?
I’d appreciate your help!
Thanks!
Hi, what were the actual problems you ran into deploying AWX on microk8s? If your storage provisioner, ingress controller, and load balancer are configured correctly, there are fewer differences between deploying on a single-node cluster and deploying on a multi-node cluster.
Sorry if this is a stupid recommendation, but before deploying AWX Operator and AWX, I recommend you test your cluster with a minimal web app, following steps like this, to ensure your LB and ingress work as expected: Question about adding remote EE node to AWX k8s cluster - #9 by kurokobo
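A minimal smoke test along those lines might look like the following sketch. All names, the image, and the hostname are illustrative placeholders, not from this thread; adjust them to your environment before applying:

```yaml
# Minimal test app to verify MetalLB + Ingress before deploying AWX.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
    - host: hello.example.local   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
```

If `curl http://hello.example.local` (with a matching hosts entry pointing at the MetalLB IP) returns the nginx welcome page, the LB and ingress path works end to end.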
This should work, but if you are interested in a more native way in K8s, you can create an NFS-based PV instead of a hostPath-based one: Persistent Volumes | Kubernetes. Also, there is a good CSI driver for NFS: https://microk8s.io/docs/how-to-nfs
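For reference, a statically provisioned NFS-backed PV with a PVC bound to it might look like this sketch; the server IP, export path, names, and sizes are illustrative assumptions, not values from this thread:

```yaml
# Sketch of a static NFS-backed PV and a PVC explicitly bound to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.0.2.10   # placeholder NFS server address
    path: /data/awx      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""          # disable dynamic provisioning
  volumeName: awx-projects-pv   # bind to the static PV above
```

Setting `storageClassName: ""` together with `volumeName` pins the claim to the static PV instead of triggering a dynamic provisioner.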
Hi @kurokobo ,
thanks for your swift reply!
The issue I’m facing is related to kubectl. When trying to deploy awx-operator with 'microk8s kubectl' I get the following error:
$ microk8s kubectl apply -k .
error: accumulating resources: accumulation err='accumulating resources from 'github.com/ansible/awx-operator/config/default?ref=2.15.0': evalsymlink failure on '/home/t9admin/00_sources/github.com/ansible/awx-operator/config/default?ref=2.15.0' : lstat /home/t9admin/00_sources/github.com: no such file or directory': failed to run '/snap/microk8s/6750/usr/bin/git fetch --depth=1 https://github.com/ansible/awx-operator 2.15.0': fatal: unable to find remote helper for 'https'
: exit status 128
I found this thread, which pointed me in the direction of kubectl: kustomize feature of kubectl does not function · Issue #3988 · canonical/microk8s · GitHub
So I tried to install kubectl (Install and Set Up kubectl on Linux | Kubernetes) side-by-side with 'microk8s kubectl', and then I get the next error:
$ kubectl apply -k .
error: accumulating resources: accumulation err='accumulating resources from 'github.com/ansible/awx-operator/config/default?ref=2.15.0': evalsymlink failure on '/home/t9admin/00_sources/github.com/ansible/awx-operator/config/default?ref=2.15.0' : lstat /home/t9admin/00_sources/github.com: no such file or directory': no 'git' program on path: exec: "git": executable file not found in $PATH
This looks to me like I have to install git as well(?)
This all led me to the question of whether microk8s is maybe not the best “K8s flavour” for AWX?
Anyway, thanks a lot for your links regarding the storage implementation. And yes, you’re probably right, I should start with a minimal app to see/learn how things happen.
Thanks again for your support - any more hints much appreciated!
@kurokobo ,
I just read the How to NFS article you posted.
What would my config (awx-deploy.yml) look like if I’d like to use that persistent volume (awx-pvc)?
(can I use the same pvc for the managed postgres db?)
Here are my planned configs:
storageclass-nfs.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.9.30.40
  share: /data/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.2
pvc-nfs.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 20Gi
awx-deploy.yml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  admin_user: admin
  admin_password_secret: awx-admin-default-password
  projects_persistence: true
  projects_existing_claim: awx-pvc
  service_type: LoadBalancer
  service_annotations: |
    environment: awx
  service_labels: |
    environment: awx
  ingress_type: ingress
  hostname: awx.demo.com
  ingress_tls_secret: awx
  ingress_annotations: |
    environment: awx
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-pvc
  postgres_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 1000m
      memory: 4Gi
Regards!
Yes, if you want to use kubectl outside of microk8s, you have to have the git executable as well, since the resources in your kustomization.yaml refer to a remote git repository. It is not uncommon to have a git repository in resources in kustomization.yaml. I don’t think that microk8s is particularly unsuited to AWX; the problem is simply that the git in the snap is out of date.
If you want to follow that docs, you have to install CSI driver for NFS first. Have you installed it in the first place?
Then you should note that the PVC for projects has to be created manually, and your configuration seems correct. However, the PVC for PSQL is created automatically by AWX Operator, so the only thing you can do is specify a StorageClass in postgres_storage_class. In your case, postgres_storage_class should be nfs-csi instead of awx-pvc.
You cannot share a single PVC between the projects PVC and the PSQL PVC.
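To make that concrete, the relevant fragment of awx-deploy.yml would then look like this sketch (using the names already appearing in this thread):

```yaml
# Projects use the manually created PVC; Postgres gets its own
# dynamically provisioned volume via the nfs-csi StorageClass.
projects_persistence: true
projects_existing_claim: awx-pvc
postgres_storage_class: nfs-csi   # a StorageClass name, not a PVC name
```

The key distinction is that `projects_existing_claim` takes an existing PVC, while `postgres_storage_class` takes a StorageClass from which the operator provisions a separate PVC.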
@kurokobo ,
I was able to successfully deploy AWX.
After installing the CSI driver for NFS I created two StorageClasses, one for awx-projects and one for awx-psql, both using the nfs-csi driver and pointing to two different NFS exports.
Then I manually created the PVC for awx-projects.
For the awx-psql PVC I specified the StorageClass in my awx-deploy.yml file:
...
postgres_storage_class: awx-nfscsi-psql
postgres_data_volume_init: true
...
Thanks a lot for your support!
May I ask you another question regarding the next problem I’m facing?
AWX now is reachable over http/80 and I’d like to bring it to https/443.
I enabled the MetalLB addon in microk8s, specifying an external IP:
microk8s enable metallb:10.9.30.50-10.9.30.50
I created a TLS secret with the SSL cert in the awx namespace. In my awx-deploy file I specify Ingress as follows:
...
ingress_type: ingress
ingress_hosts:
  - hostname: awx.demo.local
    tls_secret: awx
ingress_annotations: |
  environment: awx
...
Service type is:
...
service_type: LoadBalancer
loadbalancer_ip: 10.9.30.50
...
As stated above this works with http/80. The external IP is assigned to awx-service:
$ kubectl get all -n awx
NAME READY STATUS RESTARTS AGE
pod/awx-web-f86557d4c-5828t 3/3 Running 0 25h
pod/awx-migration-24.2.0-wwb5z 0/1 Completed 0 25h
pod/awx-task-684db4b4f8-t6lgh 4/4 Running 0 25h
pod/awx-postgres-15-0 1/1 Running 0 25h
pod/awx-operator-controller-manager-9874d5cfc-mszc8 2/2 Running 0 25h
pod/awx-task-684db4b4f8-5b8bl 4/4 Running 0 17h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/awx-operator-controller-manager-metrics-service ClusterIP 10.254.255.46 <none> 8443/TCP 25h
service/awx-postgres-15 ClusterIP None <none> 5432/TCP 25h
service/awx-service LoadBalancer 10.254.255.28 10.9.30.50 80:31569/TCP 25h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/awx-web 1/1 1 1 25h
deployment.apps/awx-operator-controller-manager 1/1 1 1 25h
deployment.apps/awx-task 2/2 2 2 25h
NAME DESIRED CURRENT READY AGE
replicaset.apps/awx-web-f86557d4c 1 1 1 25h
replicaset.apps/awx-operator-controller-manager-9874d5cfc 1 1 1 25h
replicaset.apps/awx-task-684db4b4f8 2 2 2 25h
NAME READY AGE
statefulset.apps/awx-postgres-15 1/1 25h
NAME COMPLETIONS DURATION AGE
job.batch/awx-migration-24.2.0 1/1 3m59s 25h
But when I add the following to my awx-deploy.yml file, http/80 stops working and https/443 doesn’t work either:
service_type: LoadBalancer
loadbalancer_ip: 10.9.30.50
loadbalancer_protocol: https
loadbalancer_port: 443
The browser states ERR_SSL_PROTOCOL_ERROR.
I’m not sure where to dig deeper. Might it be an issue with the certificate (chain not included)? Or then: I read in the awx-operator docs (awx-operator/docs/user-guide/network-and-tls-configuration.md at devel · ansible/awx-operator · GitHub) that service_type: LoadBalancer uses SSL termination and offloads traffic to AWX over http. I don’t really understand what this means… should I then somehow assign the TLS secret to the LoadBalancer?
I was also wondering if it’s possible to redirect http → https, if someone calls awx over http://… ?
Anyway, if you could give me a tip on where to start, I would be very grateful.
Regards!
I’m glad you were able to move forward
Typically, an Ingress resource is used to access web applications hosted on the K8s cluster via HTTPS.
[The browser]
-> HTTPS -> [Ingress (TLS terminated)]
-> HTTP -> [Service (ClusterIP)]
-> HTTP -> [Web application pod]
You are still on microk8s, so you should:
- Enable both metallb and ingress
- Make your ingress controller listen on 80 and 443, and use metallb with an external IP address
- Refer to the official docs. The Setting up a MetalLB/Ingress service section is important for you: https://microk8s.io/docs/addon-metallb
This will cause:
- Ingress controller listens on 443 with an external IP (via a LoadBalancer-typed Service for the ingress controller)
- Ingress controller receives your HTTPS traffic and routes it to a specific Service resource by following the Ingress resource you’ve defined
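Based on the example in the microk8s MetalLB docs, the LoadBalancer Service for the ingress controller can be sketched like this; verify the selector against your actual ingress controller pods, and note that the IP is the one from this thread:

```yaml
# LoadBalancer Service exposing the microk8s nginx ingress controller
# on 80 and 443 via a MetalLB external IP, per the
# "Setting up a MetalLB/Ingress service" section of the docs.
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s   # label used by the microk8s ingress addon
  type: LoadBalancer
  loadBalancerIP: 10.9.30.50
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
```

With this in place, MetalLB assigns 10.9.30.50 to the ingress controller itself rather than to the AWX Service.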
Then your AWX should be:
ingress_type: ingress
ingress_hosts:
  - hostname: awx.demo.local
    tls_secret: awx
ingress_annotations: |
  environment: awx
service_type: ClusterIP
Finally, this is how everything works (it’s actually a bit more complicated, but in a nutshell):
[The browser]
-> HTTPS -> [Service (LoadBalancer) for Ingress Controller, with external IP via metallb]
-> HTTPS -> [Ingress Controller pod that refers to the Ingress for AWX, with TLS termination]
-> HTTP -> [Service (ClusterIP)]
-> HTTP -> [AWX pod]
@kurokobo , thanks, I’ll give it a try
Hi @kurokobo
just wanted to say that, thanks to your support, AWX now runs on my 3-node microk8s cluster (over https). THANKS!
After playing around with it and testing it, I figured out that outbound traffic from AWX (e.g. running a ping playbook) randomly uses the node IP address of one of the 3 nodes as the source address. Is this expected behaviour, or is there a way to “bundle” traffic to the outside world over one single vIP address (similar to ingress but in the opposite direction)?
Best regards!
This is probably intended behavior, since any playbook on AWX is invoked inside an ephemeral automation job pod, and this pod is created on one of the available nodes.
As you mentioned, the opposite direction of ingress is called egress, and making the source IP address of outbound traffic predictable is usually achieved by a feature called an egress gateway. It’s not a pure Kubernetes implementation but is provided by CNI plugins like Calico. I’m not sure if it is available on Calico in microk8s, and I don’t have any experience configuring an egress gateway, so I can’t provide further advice on this.
Alternatively, you can configure SNAT on your router, but I don’t know if this is an available option for you.
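As a rough illustration of the router-side SNAT idea only: on a Linux-based router, traffic from the node subnet could be rewritten to leave with a single fixed source address. The subnet matches the addresses seen in this thread, but the egress address 10.9.30.60 and interface eth0 are made-up assumptions:

```shell
# Hypothetical SNAT rule on a Linux router: all outbound traffic from
# the cluster nodes (10.9.30.0/24, assumed) leaves with one fixed
# source address (10.9.30.60, assumed) on the upstream interface.
iptables -t nat -A POSTROUTING -s 10.9.30.0/24 -o eth0 -j SNAT --to-source 10.9.30.60
```

Whether this is workable depends entirely on what your router supports; hardware appliances expose the same concept under names like "source NAT" or "NAT pool".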
To close the thread, could you mark one of the comments as the solution if there are no additional questions? Thanks!
Thanks again for your explanation!
I will close the thread
Best regards!
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.