I need a custom /etc/krb5.conf on my AWX worker nodes so they can authenticate to the Windows boxes in my company. With other files I could handle this by copying the file into place on localhost as an initial task, but that doesn't work for krb5.conf because it's owned by root. Stymied.
I assume my only path forward is to create a custom EE image with my file baked in. The admin I’m replacing could do this, but he left without documentation. Is there a guide somewhere on how to do this?
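(In case the custom-image route does turn out to be the answer: a minimal ansible-builder definition along these lines should bake the file in. This is a sketch assuming ansible-builder 3.x; the base image, file location, and tag are placeholders, not values from this thread.)

```yaml
# execution-environment.yml (ansible-builder 3.x schema)
version: 3
images:
  base_image:
    # Base EE to extend; swap in whatever image your jobs use today.
    name: quay.io/ansible/awx-ee:latest
additional_build_files:
  # Copies files/krb5.conf into the build context under _build/configs/.
  - src: files/krb5.conf
    dest: configs
additional_build_steps:
  prepend_final:
    # Bake the Kerberos config into the final image.
    - COPY _build/configs/krb5.conf /etc/krb5.conf
```

Then build and push with something like `ansible-builder build -t registry.example.com/custom-awx-ee:latest`, and point the execution environment in AWX at the pushed image.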
I have a ConfigMap with the contents of the krb5.conf I need, and that file is mounted into the web, task, and ee containers. But it is not mounted into the ephemeral worker pods, and I don't know the process for modifying those.
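For anyone following along, the ConfigMap in question looks roughly like this; the name, namespace, and realm details here are placeholders, not my real values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: awx-krb5-conf    # placeholder name
  namespace: awx
data:
  krb5.conf: |
    [libdefaults]
        default_realm = EXAMPLE.COM
        dns_lookup_realm = false
        dns_lookup_kdc = true
```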
As far as I know, you can only do this by referencing PVCs that can be bound to the containers using the extra_volumes, web_extra_volume_mounts, task_extra_volume_mounts, and ee_extra_volume_mounts directives, like so:
```yaml
extra_volumes: |
  - name: ssh-config
    persistentVolumeClaim:
      # This volume claim (and its matching volume) must be created manually
      claimName: ssh-config
  - name: static-data
    persistentVolumeClaim:
      # This volume claim (and its matching volume) must be created manually
      claimName: arkcase-static-data
web_extra_volume_mounts: |
  - name: ssh-config
    mountPath: /etc/ssh/ssh_config.d
  - name: static-data
    mountPath: /var/lib/projects
task_extra_volume_mounts: |
  - name: ssh-config
    mountPath: /etc/ssh/ssh_config.d
ee_extra_volume_mounts: |
  - name: ssh-config
    mountPath: /etc/ssh/ssh_config.d
```
This is what I ended up doing to distribute specific SSH configurations that were required (not keys, but rather configurations the ssh client should apply when attempting connections).
I did look at the source code while researching this, and I didn't see a clean way to do more "arbitrary" volume mounts here. The issue stems from the schema for the AWX custom resource definition restricting the kinds of "magic" one can do with it. I have no doubt something like what you suggest could be supported, but it would definitely require quite a few changes in several places.
Yeah, this is actually a good approach: a custom pod spec. It's a shame there's no simpler way to implement this, because having to specify an entire pod spec for the container group carries its own risk of misconfigurations (due to ignorance, mostly) leading the workers astray.
But yes, this approach of customizing the pod spec for the group should definitely work.
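To make that concrete, the customized pod spec for the container group could look roughly like this: it's the default AWX worker pod spec plus a ConfigMap mount, with the ConfigMap name (awx-krb5-conf) and namespace assumed from earlier in the thread rather than taken from anyone's actual setup.

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - --private-data-dir=/runner
      volumeMounts:
        # Project the single krb5.conf key over /etc/krb5.conf
        # without shadowing the rest of /etc.
        - name: krb5-conf
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
          readOnly: true
  volumes:
    - name: krb5-conf
      configMap:
        name: awx-krb5-conf    # assumed ConfigMap name
```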
Thank you, @kurokobo! This looks like exactly what I need. I did a quick, naive implementation from your outline with my namespace, and the worker pod failed with:
```
{"status": "error", "job_explanation": "Failed to JSON parse a line from transmit stream."} {"eof": true}
```
I’ve been in meeting after meeting all day, and I have a couple more in front of me, so I doubt I’ll be able to push at this for a day or so, but it sure looks like exactly what I need. Thank you! I’ll dig in soonest.