I keep getting this error when I run a playbook with a docker task whose state is set to `reloaded`:
```
fatal: [10.0.3.22]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n  File \"/tmp/ansible_T257mO/ansible_module_docker.py\", line 1972, in <module>\r\n    main()\r\n  File \"/tmp/ansible_T257mO/ansible_module_docker.py\", line 1942, in main\r\n    reloaded(manager, containers, count, name)\r\n  File \"/tmp/ansible_T257mO/ansible_module_docker.py\", line 1792, in reloaded\r\n    for container in manager.get_differing_containers():\r\n  File \"/tmp/ansible_T257mO/ansible_module_docker.py\", line 1305, in get_differing_containers\r\n    name, value = container_label.split('=', 1)\r\nValueError: need more than 1 value to unpack\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
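The crash comes from the tuple unpacking at line 1305 of the module: it assumes every container label serializes as a `name=value` string, so any label string with no `=` in it makes the two-target unpack raise `ValueError`. A minimal sketch of the failure mode (the label values are hypothetical):

```python
# Hypothetical label strings: the module expects "name=value" pairs,
# but a bare key with no "=" cannot be unpacked into two names.
container_labels = ["com.example.version=1.0", "maintainer"]

def split_label(container_label):
    # The same unpacking the module performs in get_differing_containers().
    name, value = container_label.split("=", 1)
    return name, value

for label in container_labels:
    try:
        print(split_label(label))
    except ValueError as exc:
        # On Python 2 this message reads "need more than 1 value to unpack",
        # matching the traceback above.
        print("failed on %r: %s" % (label, exc))
```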
I only started getting this recently after I started using ansible 2.1. The docker task in question hasn’t been modified in a long time, and if I manually remove the container from the host and re-run the playbook, it starts the container ok.
It’s a git clone of the stable-2.1 branch. I recently (like, today) did a `git pull` in addition to `git submodule update --recursive` and tried again, and still got the same error.
I just tried switching back to the stable-2.0.0.1 branch and it seems to be working fine, so it appears to be a specific 2.1 issue.
But with older Ansible it didn’t show an error; it just reloaded the container. I haven’t looked into why. I know it reloads when it shouldn’t, e.g. it reloads a container that hasn’t changed, but either way the PR makes it run again.
If you have labels in the Dockerfile, the container will restart unless you also add them to the Ansible task. Not sure if this should be the behavior, but it is.
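A hedged sketch of why that happens, assuming (as the traceback suggests) that the module diffs the labels declared in the task against all labels reported on the running container, which includes labels baked in by the image's Dockerfile; the label names and values below are made up:

```python
# Labels declared in the ansible task (hypothetical).
task_labels = {"app": "web"}

# Labels reported on the running container: LABEL instructions from the
# Dockerfile show up here too, even though the task never set them.
container_labels = {"app": "web", "maintainer": "ops@example.com"}

# Any label on the container that the task doesn't account for reads as
# a difference, so the module concludes the container must be reloaded.
differing = {k: v for k, v in container_labels.items() if task_labels.get(k) != v}
print(differing)  # -> {'maintainer': 'ops@example.com'}
```

Declaring the image's labels in the task makes the two sides match, which is why adding them stops the spurious restarts.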
Also, I think Ansible 2.2 will not have this issue, as it deprecates the docker module in favor of a newer one.
Hmm, no, that didn’t fix it for me, unfortunately. I updated the docker-engine package to the latest and also pulled the latest Ansible stable-2.1 commits, but it still fails with the same error. I’ll try changing over to the new `docker_container` module instead and see if that fixes it for me.
Ok, I changed this to use `docker_container` and it seems to be working ok now.
One thing I am curious about, though, is that every time this task runs it registers as ‘changed’, even though the restart policy is explicitly set to `no`. None of the module arguments have changed between runs, so is there a valid reason why this is happening?
It’s been a while but I figured I’d bump this to see if anyone has any insight into my question above about why ansible insists on restarting containers even though the task config hasn’t changed.
I pulled the latest stable-2.1 commits, and after some more testing I’ve discovered that Ansible isn’t actually restarting the containers; it was reporting the task as changed even though nothing had changed. For a short while, as I ran and re-ran it, it seemed to settle down and stop reporting changes, but I just tried it a couple more times and it’s started doing it again.