Help needed for error with ansible-test and podman

I’m trying to run the integration tests for the community.aws collection with ansible-test, using the command:

 ansible-test integration ec2_vpc_igw --docker -v

However, I’m getting the errors below. I’m on ansible-test 2.15.4 and podman 4.6.2, and I also have podman-docker installed.

Any ideas how to fix these errors?

Configured locale: en_US.UTF-8
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
Using existing aws cloud config: tests/integration/cloud-config-aws.ini
Run command: docker -v
Run command: podman -v
Detected "podman" container runtime version: podman version 4.6.2
Run command: podman system connection list --format=json
Run command: podman info --format '{{ json . }}'
Run command: podman version --format '{{ json . }}'
Container runtime: podman client=4.6.2 server=4.6.2 cgroup=v2
Assuming Docker is available on localhost.
Run command: podman image inspect quay.io/ansible/ansible-test-utility-container:2.0.0
Run command: podman run --volume /sys/fs/cgroup:/probe:ro --name ansible-test-probe-bqd2dtE3 --rm quay.io/ansible/ansible-test-utility-container:2.0.0 sh -c 'audit-status  ...
Container host audit status: ECONNREFUSED (-111)
Container host max open files: 524288
Container loginuid: 0
Starting new "ansible-test-controller-bqd2dtE3" container.
Run command: podman image inspect quay.io/ansible/default-test-container:7.14.0
Run command: podman network inspect podman
Run command: podman run --tmpfs /tmp:exec --tmpfs /run:exec --tmpfs /run/lock --cap-add SYS_CHROOT --systemd always --cgroupns private -dt --ulimit nofile=10240 --name ans ...
ERROR: Command "podman run --tmpfs /tmp:exec --tmpfs /run:exec --tmpfs /run/lock --cap-add SYS_CHROOT --systemd always --cgroupns private -dt --ulimit nofile=10240 --name ansible-test-controller-bqd2dtE3 --network podman quay.io/ansible/default-test-container:7.14.0" returned exit status 126.
>>> Standard Error
Error: OCI runtime error: crun: chmod `run/shm`: Operation not supported
WARNING: Failed to run docker image "quay.io/ansible/default-test-container:7.14.0". Waiting a few seconds before trying again.
Run command: podman stop --time 0 ansible-test-controller-bqd2dtE3
Run command: podman rm ansible-test-controller-bqd2dtE3
Run command: podman run --tmpfs /tmp:exec --tmpfs /run:exec --tmpfs /run/lock --cap-add SYS_CHROOT --systemd always --cgroupns private -dt --ulimit nofile=10240 --name ans ...
ERROR: Command "podman run --tmpfs /tmp:exec --tmpfs /run:exec --tmpfs /run/lock --cap-add SYS_CHROOT --systemd always --cgroupns private -dt --ulimit nofile=10240 --name ansible-test-controller-bqd2dtE3 --network podman quay.io/ansible/default-test-container:7.14.0" returned exit status 126.
>>> Standard Error
Error: OCI runtime error: crun: chmod `run/shm`: Operation not supported
WARNING: Failed to run docker image "quay.io/ansible/default-test-container:7.14.0". Waiting a few seconds before trying again.
Run command: podman stop --time 0 ansible-test-controller-bqd2dtE3
Run command: podman rm ansible-test-controller-bqd2dtE3
ERROR: Host DockerConfig(python=NativePythonConfig(version='3.11', path='/usr/bin/python3.11'), name='default', image='quay.io/ansible/default-test-container:7.14.0', memory=None, privileged=False, seccomp='default', cgroup=CGroupVersion.V1_V2, audit=AuditMode.REQUIRED) job failed:
Failed to run docker image "quay.io/ansible/default-test-container:7.14.0".
FATAL: Host job(s) failed. See previous error(s) for details.

Hi,

privileged=False

This is almost certainly your issue; read-write access to /run/shm requires the container to run in privileged mode, AFAIK.

Edit: Alternatively, it looks like you could also run it unprivileged by mounting /run as tmpfs as well; see Running systemd in a non-privileged container | Red Hat Developer.
That would be a much better idea from a security standpoint. Give it a try!
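To make the suggestion concrete, here is a minimal sketch of running a systemd-based container unprivileged with tmpfs mounts, along the lines of that article. The image name is a placeholder and the exact flags may need adjusting for your image; this is an illustration, not the exact command ansible-test generates (its full invocation is visible in the log above).

```shell
# Hypothetical example: run systemd in an unprivileged container by
# mounting the writable runtime directories as tmpfs instead of
# granting --privileged. IMAGE is a placeholder for a systemd-enabled image.
podman run -dt \
  --systemd=always \
  --tmpfs /tmp \
  --tmpfs /run \
  --tmpfs /run/lock \
  --name systemd-test \
  IMAGE
```

Note that the failing ansible-test command in the log already passes equivalent `--tmpfs` flags, which is why this alone may not resolve the error.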


I’m using podman, not docker, so that article doesn’t seem relevant: it states that podman has native support for running systemd containers, and the podman command being run already includes the tmpfs mounts the article mentions.

I did try running ansible-test with the --docker-privileged option, but it returned the same errors.

I found the issue: a recent kernel change exposed a bug in crun. See container with systemd don’t start · Issue #1308 · containers/crun · GitHub for details.

I was on the affected crun version (1.9), and upgrading crun to 1.9.2 fixed the issue.
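For anyone hitting the same error, a quick way to check whether your installed crun predates the fix is to compare its version against 1.9.2 with `sort -V`. The `is_affected` helper below is my own illustrative snippet, not part of ansible-test or crun; in practice you would feed it the output of `crun --version`.

```shell
# Hypothetical helper: succeeds (exit 0) if the given crun version is
# older than 1.9.2, i.e. potentially affected by the run/shm chmod bug.
is_affected() {
  ver="$1"
  # sort -V orders versions; if $ver sorts first and isn't 1.9.2, it's older.
  [ "$(printf '%s\n' "$ver" 1.9.2 | sort -V | head -n1)" = "$ver" ] \
    && [ "$ver" != "1.9.2" ]
}

is_affected 1.9 && echo "affected" || echo "not affected"
```

On an affected host this prints "affected"; after upgrading crun, rerun it with the new version string to confirm.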


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.