I am currently using AWX in the k8s environment.
However, my workflow job is configured with four steps, so the automation job pod is created four times, and the total lead time ends up being long.
So I would like to launch a permanent execution environment pod that plays the role of the automation job pod, and use it by adding it to an instance group.
I am wondering if this is possible.
I also understand that an instance of the execution node type runs jobs by dynamically creating an EE with podman; is that correct?
When AWX starts the execution environment (the automation job pod or a podman container), it injects the job template details (environment variables, credentials, and project contents) and sets the ansible-runner command that starts the playbook. This ties the individual container run to the job template run. To my knowledge, there is no way to interact with the container after it starts, and if there were, it would probably open up security concerns by unintentionally exposing credentials or functionality to future jobs.
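To make that concrete, the per-job pod a container group creates looks roughly like this (a sketch based on the default container group pod spec; the namespace and image are assumptions and can differ in your install):

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: awx                             # assumption: AWX's own namespace
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest   # assumption: the default EE image
      args:                                  # receptor attaches to this pod and
        - ansible-runner                     # streams the job payload (env vars,
        - worker                             # credentials, project) over stdin
        - --private-data-dir=/runner
```

Because the pod's whole lifecycle is that ansible-runner worker process, there is nothing left to reuse once the job finishes.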
I’m curious where you’re seeing the start-up delay. Does it seem to be related to the K8s cluster actually starting the pod, or more to the cluster pulling the EE image from the registry? Also, about how big is the project? I’ve had large projects take a long time to download into the EE, which can cause job start delays.
It turned out the job start delays were caused by a big project, so first I need to reduce the project size.
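For example, if roles and collections are vendored into the project repo, they can move into requirements files that AWX installs during the project update instead (a sketch; the collection name and version pin are placeholders):

```yaml
# collections/requirements.yml in the project root -- AWX runs ansible-galaxy
# against this during the project update, so the collections themselves no
# longer need to be committed to the repo (roles/requirements.yml works the
# same way for roles)
collections:
  - name: community.general   # placeholder: whatever the playbooks actually use
    version: ">=8.0.0"        # placeholder version pin
```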
Separately, here’s how I’m trying to approach the permanent EE idea:
I’m experimenting with running a standalone awx-ee container that runs receptor (like the awx-ee image that ships alongside AWX), defining a local work type on it, and having it execute the work passed in from the AWX server right away.
But when I set the work type to local, AWX cannot connect to the EE because of the error “work type did not expect a signature”.
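From what I can tell, this error means AWX is signing the work submission while the EE’s work type isn’t set up to verify signatures. A sketch of the receptor.conf I think the EE needs (the node id and key path are assumptions; the public key must be the counterpart of AWX’s work-signing private key):

```yaml
# receptor.conf on the standalone EE
- node:
    id: my-ee-node                 # assumption: hypothetical node id

- work-command:
    worktype: local                # the work type AWX submits
    command: ansible-runner
    params: worker
    allowruntimeparams: true
    verifysignature: true          # without this, signed submissions fail with
                                   # "work type did not expect a signature"

- work-verification:
    publickey: /etc/receptor/work_public_key.pem   # assumption: path to the
                                                   # public half of AWX's
                                                   # work-signing key pair
```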