Playbook unable to write to persistent volume

AWX 21.12.0 installed using kurokobo/awx-on-k3s (an example implementation of AWX on a single-node K3s using AWX Operator, with an easy-to-use simplified configuration and ownership of data and passwords).

```
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6+k3s1", GitCommit:"9176e03c5788e467420376d10a1da2b6de6ff31f", GitTreeState:"clean", BuildDate:"2023-01-26T00:47:47Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6+k3s1", GitCommit:"9176e03c5788e467420376d10a1da2b6de6ff31f", GitTreeState:"clean", BuildDate:"2023-01-26T00:47:47Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
```

I have a very simple playbook that uses the community.vmware.vmware_cfg_backup module to retrieve ESXi configuration data and save it to disk.
This fails when it tries to save the data to the PV, with an error stating that '/data/esxi-config-backups/<FILE>.tgz' does not exist.
The PV and PVC definitions look good, and kubectl lists them as 'Bound':

```
$ kubectl -n awx get pvc esxi-config-backups-claim
NAME                        STATUS   VOLUME                       CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
esxi-config-backups-claim   Bound    esxi-config-backups-volume   4Gi        RWO            esxi-config-backups-volume   20h
$
$ kubectl -n awx get pv esxi-config-backups-volume
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS                 REASON   AGE
esxi-config-backups-volume   4Gi        RWO            Retain           Bound    awx/esxi-config-backups-claim   esxi-config-backups-volume            21h
```
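For anyone reproducing this, a PV/PVC pair matching the output above might look something like the following. This is only a sketch in the style of the hostPath volumes used by the awx-on-k3s guide, not my exact definitions; the hostPath path is an example.

```
# Sketch only: a hostPath-backed PV/PVC pair matching the names, size, and
# access mode shown above. The hostPath path is an example, not the real one.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: esxi-config-backups-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 4Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: esxi-config-backups-volume
  hostPath:
    path: /data/esxi-config-backups
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: esxi-config-backups-claim
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: esxi-config-backups-volume
  resources:
    requests:
      storage: 4Gi
```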

The playbook looks like this:

```
- name: Get the ESXi configuration
  hosts: esxi
  gather_facts: no

  tasks:

    - name: Include vault secrets
      ansible.builtin.include_vars:
        file: esxi_root_password.yaml

    - name: Get the configuration
      community.vmware.vmware_cfg_backup:
        hostname: '{{ inventory_hostname }}'
        username: '{{ esxi_user }}'
        password: '{{ esxi_pass }}'
        state: saved
        dest: /data/esxi-config-backups/{{ inventory_hostname }}.tgz
        validate_certs: false
      delegate_to: localhost
```

What do I need to do to make the PV accessible to the playbook?
Please be aware that my knowledge of Kubernetes (k3s) is very, very limited.

Hi, thanks for using my guide :smiley:

You've created the PV and PVC correctly, but have you created and specified a Container Group with a customized pod spec that mounts your PVC on the automation job pods?

Hi,
thanks for the quick response.
As I mentioned earlier, my knowledge of Kubernetes is very limited. Can you point me to some examples of creating a Container Group?

A Container Group is not a Kubernetes thing, but an AWX thing.

My guide also covers using Container Groups, although I think it is a bit more than your requirements need: https://github.com/kurokobo/awx-on-k3s/tree/main/containergroup

For your use case:

Create a new Container Group via Administration > Instance Groups > Add > Add container group, enable Customize pod specification, and then define the specification as follows:

```
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: demo-volume
          mountPath: /data/esxi-config-backups
  volumes:
    - name: demo-volume
      persistentVolumeClaim:
        claimName: esxi-config-backups-claim
```

Then specify this Container Group in the Instance Groups field of the Job Template.
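If you want to confirm the mount is writable before running your real playbook, a throwaway job along these lines should succeed once the Container Group is selected on the Job Template (this is just a sketch; the filename is arbitrary):

```
# Minimal check: runs inside the automation job pod and writes into the mounted PVC.
# The test filename is only an example; remove the file after testing.
- name: Verify the PVC mount inside the job pod
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Write a test file to the mounted path
      ansible.builtin.copy:
        content: "mount check\n"
        dest: /data/esxi-config-backups/mount-check.txt
```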

That worked!
Thank you so much.
