AWX 19 on Kubernetes

I’ve just checked the pod status on my Kubernetes cluster and I’m seeing two pods in Pending status. I’m a bit unsure why, as everything appears to be working correctly from an AWX point of view.

#kubectl get po -A

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-2wjpx   1/1     Running     0          13d
kube-system   metrics-server-86cbb8457f-wfg85           1/1     Running     0          13d
kube-system   coredns-7448499f4d-4p2fv                  1/1     Running     0          13d
kube-system   helm-install-traefik-crd-jfp9h            0/1     Completed   0          13d
kube-system   helm-install-traefik-pkhxv                0/1     Completed   1          13d
kube-system   svclb-traefik-xdzzr                       2/2     Running     0          13d
kube-system   traefik-97b44b794-g9mbs                   1/1     Running     0          13d
default       awx-operator-69c646c48f-lnsmq             1/1     Running     0          13d
awx           awx-postgres-0                            1/1     Running     0          13d
awx           awx-59ff55b5b-4m8wv                       4/4     Running     0          13d
default       awx-postgres-0                            0/1     Pending     0          11s
default       awx-59ff55b5b-h86gm                       0/4     Pending     0          4s

#kubectl get deployments

NAME           READY   UP-TO-DATE   AVAILABLE   AGE
awx-operator   1/1     1            1           13d
awx            0/1     1            0           8d

Any ideas anyone?

Thank you

Just to add further info: I am seeing the errors below on the pods in the Pending state. I’m unsure why this is happening now and not at any point in the last 13 days or so.

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  9m41s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  9m39s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

Check the PV and PVC statuses if you’re using static storage provisioning such as NFS or similar. Either way, the error is about a PVC that’s not bound to a PV, so checking that should help.
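For instance, something along these lines (generic checks, not specific to your manifests):

kubectl get pv
kubectl get pvc
kubectl describe pvc <claim-name>

In a healthy static setup both the PV and the PVC show as Bound; anything stuck in Pending or Available points at the binding between them.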

Thanks

I’m still none the wiser.

Here are my yaml files:

pv.yaml

When I check the filesystems with df -h, I don’t see anything relating to /data/postgres or /data/projects.

Do I need to set these up before deploying?

hostPath:
  path: /data/postgres

hostPath:
  path: /data/projects

This message is basically telling you that it does not have a PersistentVolumeClaim that it can fulfill. You need to provide that.
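As a rough sketch of what I mean, assuming hostPath PVs like the snippets you posted (the names, size and storageClassName here are my guesses based on this thread, not your actual files):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-volume
spec:
  capacity:
    storage: 2Gi                           # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: awx-postgres-volume    # must match the claim below
  hostPath:
    path: /data/postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-awx-postgres-0            # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: awx-postgres-volume    # same string binds it to the PV above
  resources:
    requests:
      storage: 2Gi

The PV controller matches the two on storageClassName, access mode and size, even when no actual StorageClass object of that name exists.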

Cheers

If you look at my first post, you will see that the original pods are in a Running state, but then I also have a postgres and an awx pod in Pending. Additionally, everything worked fine until I rebooted the cluster and then lost my data; I’m not fussed about this as I am testing at the moment.

From what I have read, and if you check the yaml files, should that directory not work as a persistent volume? I can see the data being written to /data/postgres.

You have a PersistentVolume, but you don’t have a PersistentVolumeClaim, so the deployment does not know it is there. There is no glue.

Try running “kubectl get pvc”. :)

Hey, thanks for that. Something weird going on for sure…

kubectl get pvc

NAME                      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
awx-projects-claim        Pending                                      awx-projects-volume   8d
postgres-awx-postgres-0   Pending                                      awx-postgres-volume   8d

kubectl get pv

NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS          REASON   AGE
awx-projects-volume   2Gi        RWO            Retain           Bound    awx/awx-projects-claim        awx-projects-volume            13d
awx-postgres-volume   2Gi        RWO            Retain           Bound    awx/postgres-awx-postgres-0   awx-postgres-volume            13d

The PVs show as Bound, but the PVCs as Pending…

Here are my yaml files again:

awx.yaml

Ok, my bad, it was there! But it gives us clues for sure. Can you please describe it?

kubectl describe pvc postgres-awx-postgres-0
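Since PVCs are namespaced, it can also save confusion to check every namespace at once:

kubectl get pvc -A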

Some more info

k describe pvc postgres-awx-postgres-0

Name:          postgres-awx-postgres-0
Namespace:     default
StorageClass:  awx-postgres-volume
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=database
               app.kubernetes.io/instance=postgres-awx
               app.kubernetes.io/managed-by=awx-operator
               app.kubernetes.io/name=postgres
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       awx-postgres-0
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  26m (x3281 over 14h)  persistentvolume-controller  storageclass.storage.k8s.io "awx-postgres-volume" not found
  Warning  ProvisioningFailed  31s (x83 over 20m)    persistentvolume-controller  storageclass.storage.k8s.io "awx-postgres-volume" not found

k describe pvc postgres-awx-postgres-0 -n awx

Name:          postgres-awx-postgres-0
Namespace:     awx
StorageClass:  awx-postgres-volume
Status:        Bound
Volume:        awx-postgres-volume
Labels:        app.kubernetes.io/component=database
               app.kubernetes.io/instance=postgres-awx
               app.kubernetes.io/managed-by=awx-operator
               app.kubernetes.io/name=postgres
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       awx-postgres-0
Events:        <none>

storageclass.storage.k8s.io "awx-postgres-volume" not found

Does this volume exist?
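A couple of generic checks that could narrow it down, since the event complains about a StorageClass rather than a volume:

kubectl get pv awx-postgres-volume
kubectl get storageclass

If the PV exists but no StorageClass of that name does, a PVC can still bind statically when its storageClassName matches the PV’s, but dynamic provisioning of a new volume will fail exactly as in your events.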

If you check the postgres PVC in the awx namespace above, you can see it does.

I reviewed your email. If the path (/data/postgres and such) does not exist on the host, it will bomb; the mount is a hostPath option.
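If that is what’s happening, creating the directories on the node up front should do it (assuming a single node, so the pod always lands on this host):

sudo mkdir -p /data/postgres /data/projects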

I just went through the install again…

k apply -f https://raw.githubusercontent.com/ansible/awx-operator/0.13.0/deploy/awx-operator.yaml

kubectl get pv

NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS          REASON   AGE
awx-projects-volume   2Gi        RWO            Retain           Bound    awx/awx-projects-claim            awx-projects-volume            11m
awx-postgres-volume   2Gi        RWO            Retain           Bound    default/postgres-awx-postgres-0   awx-postgres-volume            11m

Why are the PVCs in different namespaces?

Should that be the case?

I don’t believe that should be the case. However, just under “name: awx-postgres-volume” in your awx-postgres-volume.yaml file, you can specify “namespace: awx” so that the PVC is created in the correct namespace.
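Something like this, assuming awx-postgres-volume.yaml is where the claim side is defined (a sketch, not your actual file):

metadata:
  name: awx-postgres-volume
  namespace: awx   # create the object in the awx namespace

Namespaced objects such as PVCs land in whatever namespace they’re applied to, so pinning it in the manifest (or applying with -n awx) keeps it next to the rest of the AWX resources.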