I have built an EE, and I am now trying to use it through ansible-runner, but it really seems like ansible-runner is not meant to be used manually.
I am currently using version 2.4.0 of ansible-runner on a RHEL8 box with podman, and I have created a Runner input directory structure, which seems to work:
.
├── env
│ ├── cmdline
│ └── settings
├── inventory
│ └── inventory-test.yml
├── poetry.lock
├── project
├── pyproject.toml
└── README.md
$ cat env/settings
---
container_image: localhost/my-ee:latest
process_isolation: true
rotate_artifacts: 5
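(For completeness: env/cmdline just holds extra CLI arguments that runner appends to the generated command; it is presumably where the --user root in the podman command shown further down comes from.)
$ cat env/cmdline
--user root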
I can use this to ping localhost:
$ ansible-runner run -m ping --hosts localhost .
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}
The runner docs seem to indicate that, when run using an EE, runner should mount in the SSH agent socket and/or the host user's ~/.ssh (as long as it is not symlinked). Ref: Using Runner with Execution Environments — Ansible Runner Documentation
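In other words, based on my reading of that page, I expected the generated podman command to include mounts along these lines (my own illustration, the container-side paths are a guess):
podman run ... \
  -v /home/user/.ssh/:/root/.ssh/ \
  -v $SSH_AUTH_SOCK:/ssh_auth_sock \
  -e SSH_AUTH_SOCK=/ssh_auth_sock \
  ... localhost/my-ee:latest ...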
However, when running a playbook that is present inside the EE against a remote target, it fails to authenticate. Adding the debug flag shows that runner does in fact not add any mount options that would handle SSH keys when executing podman.
Looking at the runner code, this seems to be corroborated by the comment in the wrap_args_for_containerization() function:
# For run() and run_async() API value of base execution_mode is 'BaseExecutionMode.NONE'
# and the container volume mounts are handled separately using 'container_volume_mounts'
# hence ignore additional mount here
if execution_mode != BaseExecutionMode.NONE:
Ref: ansible-runner/src/ansible_runner/config/_base.py#L516-L519
As far as I can gather, when running ansible-runner run, the BaseExecutionMode is in fact NONE, so all the code inside this if statement is skipped, and neither the user's ~/.ssh nor the SSH_AUTH_SOCK socket is mounted.
This is a snippet of the output when running with the --debug flag:
sandbox disabled
containerization enabled
env:
....
command: podman run --rm --tty --interactive --workdir /runner/project -v /home/user/src/project/ansible-test-runner/:/runner/:Z --env-file /home/user/src/project/ansible-test-runner/artifacts/87671a22-d89a-4668-8f20-dd6d0d97bc4c/env.list --quiet --name ansible_runner_87671a22-d89a-4668-8f20-dd6d0d97bc4c localhost/my-ee:latest ansible --user root -i /runner/inventory -m ping localhost
So given the comment in the code, which runner command is this mounting actually intended to work with? It doesn't seem to make much sense to mount in ~/.ssh for the transmit/worker/process commands?
What is the expected way of running an EE manually on a server, without copying/adding a “permanent” SSH key in the env/ssh_key file?
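For now it looks like I could work around it by declaring the mounts myself through the documented container_volume_mounts and container_options settings keys, something like this untested sketch (the socket path is specific to my box, /root/.ssh assumes the process inside the EE runs as root, and as far as I can tell runner does not expand $SSH_AUTH_SOCK in the settings file):
$ cat env/settings
---
container_image: localhost/my-ee:latest
process_isolation: true
rotate_artifacts: 5
# mount the host user's keys into the container user's home
container_volume_mounts:
  - /home/user/.ssh/:/root/.ssh/:Z
# forward the agent socket by hand
container_options:
  - --volume=/run/user/1000/ssh-agent.sock:/ssh_auth_sock:Z
  - --env=SSH_AUTH_SOCK=/ssh_auth_sock
But that is basically hand-rolling what the docs say runner should already be doing.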
Also, if I am not mistaken, the code that would have added mount options for these paths is missing the :Z label, so even if it had added those mount options to podman, it would likely not have worked anyway? (Quick illustration below the links.)
- ansible-runner/src/ansible_runner/config/_base.py#L681
- ansible-runner/src/ansible_runner/config/_base.py#L687
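A quick way to see why the label matters on an SELinux-enforcing host like RHEL8 (illustrative commands, unrelated to runner itself):
$ podman run --rm -v "$HOME/.ssh:/mnt/ssh" localhost/my-ee:latest ls /mnt/ssh
ls: cannot open directory '/mnt/ssh': Permission denied
$ # with :Z podman relabels the host files so the container may read them
$ podman run --rm -v "$HOME/.ssh:/mnt/ssh:Z" localhost/my-ee:latest ls /mnt/ssh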