After stumbling over this problem and searching for hours through similar issues, a final clarification would help me a lot.
I’m running AWX on k3s using awx-operator and this guide: https://github.com/kurokobo/awx-on-k3s
I’m syncing my project and my inventory via SCM.
The directory structure looks like this, starting from the top-level directory:
.
./playbook
./inventory
./inventory/host_vars
./inventory/group_vars
All my playbooks reside in playbook/, all my inventory files live in inventory/, and in ansible.cfg I configured the whole inventory directory as the inventory source.
I followed the tips from here:
https://docs.ansible.com/ansible/latest/tips_tricks/ansible_tips_tricks.html#inventory-tips Section “Keep vaulted variables safely visible”.
So all passwords for all my servers are vaulted, and this works perfectly with the Ansible CLI.
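To illustrate the pattern from that documentation section: a host entry in my inventory looks roughly like the sketch below (host name, variable name, and password are placeholders).

inventory/host_vars/host01/vars.yml (plain text, safe to grep):

    ansible_become_password: "{{ vault_ansible_become_password }}"

inventory/host_vars/host01/vault.yml, encrypted with something like ansible-vault encrypt --encrypt-vault-id id1 vault.yml:

    vault_ansible_become_password: the-actual-sudo-password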
I also configured an ansible.cfg with the following relevant content:
vault_id_match = True
vault_identity_list = id1@/path/to/ansible-vault-secret.id1,id2@/path/to/ansible-vault-secret.id2
These two files contain my Ansible Vault passwords, one per vault ID.
They are, of course, not in SCM.
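Put together, the relevant part of my ansible.cfg for the CLI looks roughly like this (the inventory line reflects what I described above; paths are placeholders):

    [defaults]
    inventory = ./inventory
    vault_id_match = True
    vault_identity_list = id1@/path/to/ansible-vault-secret.id1,id2@/path/to/ansible-vault-secret.id2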
In AWX, I configured two vault credentials with the same IDs and passwords I'm using above.
Still, I can't get any playbook to decrypt the corresponding vaulted string (it is the sudo password of the machines I'm targeting with Ansible).
Can someone please enlighten me how to get this setup working on AWX?
Do the results change if you remove the ansible.cfg from the project directory and just pass the vault credential in?
AWX Team
The output when I remove the ansible.cfg from the root level of my project is the following:
ansible-playbook [core 2.15.2rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.17 (main, Jun 26 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Vault password (id1):
Vault password (id2):
host_list declined parsing /runner/inventory/hosts as it did not pass its verify_file() method
Parsed /runner/inventory/hosts inventory source with script plugin
Skipping callback 'awx_display', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: apt-update.yml *******************************************************
1 plays in playbook/apt-update.yml
PLAY [Update the apt chache on Debian/Ubuntu] **********************************
TASK [Update apt repo and cache on all Debian/Ubuntu boxes] ********************
task path: /runner/project/playbook/apt-update.yml:10
fatal: [host01]: FAILED! => {
    "msg": "Decryption failed (no vault secrets were found that could decrypt)"
}
fatal: [host02]: FAILED! => {
    "msg": "Decryption failed (no vault secrets were found that could decrypt)"
}
[several hosts skipped]
fatal: [hostxx]: FAILED! => {
    "msg": "Decryption failed (no vault secrets were found that could decrypt)"
}
I selected both vault passwords as credentials for the job.
I don't understand what you mean by "pass the vault credential in". Where should I do that?
If by "pass the vault credential in" you mean simply attaching the created vault credentials to the job template: that's not working either.
Would it help if I built my own execution environment with an ansible.cfg configured the way it works for the CLI version?
I have to admit I'm only at the beginning of reading the ansible-builder documentation.
I solved it after several more hours of searching and piecing everything together.
The golden tip is here: https://stackoverflow.com/questions/67747550/how-can-i-expose-local-data-path-to-the-temporary-job-container-awx-job-xxxxx
On my single-node k3s cluster I created a directory containing a modified ansible.cfg and both files with the vault passwords.
This directory is mounted into the awx-job pods (which are usually invisible because they are only created while a job runs - you can see them by running kubectl get pods -o wide -n awx -w before launching a job).
The modified pod specification (applied via the container group's custom pod spec in AWX, as described in the linked answer) is this:
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: external-ansible-config-files
          mountPath: /etc/ansible/
  volumes:
    - name: external-ansible-config-files
      hostPath:
        path: /home/awx/awx-ee
        type: Directory
So my local /home/awx/awx-ee folder gets mounted into the awx-job pods under /etc/ansible, which is where Ansible looks for its system-wide ansible.cfg when no other config file is found.
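The contents of that host directory are simply (the file names here match the vault IDs from the beginning of the thread):

    /home/awx/awx-ee/ansible.cfg
    /home/awx/awx-ee/ansible-vault-secret.id1
    /home/awx/awx-ee/ansible-vault-secret.id2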
Just make sure to set up this line in ansible.cfg with the full path as it appears inside the container:
vault_identity_list = id1@/etc/ansible/ansible-vault-secret.id1,id2@/etc/ansible/ansible-vault-secret.id2
Note: the hostPath volume type only works on a single-node cluster; for a multi-node setup you should use a different volume type.
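For example, on a multi-node cluster the volumes section of the pod spec could point at shared storage instead; a rough sketch using an NFS volume (server and path are placeholders for whatever shared storage you actually have) would be:

    volumes:
      - name: external-ansible-config-files
        nfs:
          server: nfs.example.com
          path: /exports/awx-ee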