docker module is not upgrading my container

My understanding is that if I have a playbook that says:

- hosts: localhost
  sudo: yes
  tasks:
  - name: ensure redis container is running
    docker: image=dockerfile/redis name=redis command=" bash -c 'redis-server /etc/redis/redis.conf'"

  - name: ensure web container is running
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb

Then, if the image for cyweb gets updated with a new version, shouldn't the ansible/docker module replace my container?

Even if I explicitly give it a tag, I tell it: HERE, go to this version.

  - name: ensure redis container is running
    docker: image=dockerfile/redis name=redis command=" bash -c 'redis-server /etc/redis/redis.conf'"

  - name: Kill old container
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb state=absent

  - name: ensure web container is running
    docker: image=snazzy/cyweb:3.0 ports=5000:5000 links=redis:redis name=cyweb

It still does not update.

The only thing I found that works is if I kill the container first.

  - name: ensure redis container is running
    docker: image=dockerfile/redis name=redis command=" bash -c 'redis-server /etc/redis/redis.conf'"

  - name: Kill old container
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb state=absent

  - name: ensure web container is running
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb

Here is my version info:

Docker:
Client version: 1.2.0
Client API version: 1.14
Go version (client): go1.3.1
Git commit (client): fa7b24f
OS/Arch (client): linux/amd64
Server version: 1.2.0
Server API version: 1.14
Go version (server): go1.3.1
Git commit (server): fa7b24f

Ansible:
ansible 1.7.1

Docker-py:

Not sure how to get this. I looked at the file /usr/share/ansible/cloud/docker, and it says 1.4 at the top.
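(One way to check the installed docker-py version is to ask Python for it; this is a general sketch rather than anything the docker module reports itself, and the attribute name differs between docker-py releases.)

  - name: report the installed docker-py version (illustrative helper task)
    command: python -c "import docker; print(getattr(docker, '__version__', getattr(docker, 'version', 'unknown')))"
    register: dockerpy_version
    changed_when: false

  - name: show docker-py version
    debug: msg="docker-py is {{ dockerpy_version.stdout }}"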

Can you please look through the list of existing docker tickets and see if one reflects what you are asking, or if not, please open a new ticket?

Thank you!

I think this is an issue already reported. :( Maybe I can work around it with a state=absent task to shut down whatever is there first. But now I’m doing ansible’s job, right :)?

I added a comment there. I think it is the same problem I’m describing.

https://github.com/ansible/ansible/issues/8905

From your comment that the only thing that works is killing the container first, I wonder if the second example above isn't supposed to have the "Kill old container" task?

Assuming that:

  1. The kill old container task isn’t supposed to be present in the second example

  2. The snazzy/cyweb container is running a long running process

  3. Manually running docker’s tools does something similar

I think this behaviour is intended.

This seems like we’re just asking the docker module to guarantee that the service is running, not to restart it. I think we need to be explicit here that we aren’t just saying “ensure service is running” but are actually saying “restart the service, picking up a new version if that’s deployed”.

-Toshio
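(For illustration, an explicit “restart the service, picking up the new version” play for the cyweb example might look roughly like the sketch below; the separate docker pull step is an assumption about how to force the newer image to be fetched first, not something the docker module of that era did on its own.)

  - name: pull the latest cyweb image
    command: docker pull snazzy/cyweb

  - name: remove the old cyweb container
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb state=absent

  - name: start cyweb again from the freshly pulled image
    docker: image=snazzy/cyweb ports=5000:5000 links=redis:redis name=cyweb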

Sorry, I’m slow to understand Ansible. But the docker module is supposed to guarantee that the service is running, correct? Shouldn’t it ensure that the VERSION of the service I specify in the playbook is running?

How does ansible handle this for other applications besides docker? I will need to go look at that. Won’t ansible let me say I want “version 1.0 of clock app” and “version 1.5 of snogger app”? And if the server has “0.9 of clock app” and “1.0 of snogger app”, it should change them to the new versions, correct?

tnx
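(For comparison with versioned packages, a rough sketch using Ansible’s yum module might look like the tasks below; the package names and versions are purely illustrative, and the exact upgrade/downgrade behaviour depends on yum and the module version in use.)

  - name: ensure version 1.0 of the clock app is installed
    yum: name=clockapp-1.0 state=present

  - name: ensure version 1.5 of the snogger app is installed
    yum: name=snoggerapp-1.5 state=present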

Conceptually, containers seem somewhere between applications and hosts. Within the application case, let's think about services in particular. You can tell a module like yum to update a package to a specific version. However, ansible doesn't ensure that the service shipped with that package is restarted after the update. Most of the time the package system itself will do that, but sometimes it won't (for instance, if the updated service needs new data [changed config file format or old user-entered data] and no automatic data conversion is possible). In the latter case, you'd have to specify a restart as a separate ansible task.
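(A minimal sketch of that "update the package, then restart the service as a separate step" pattern, using a handler; the host group, package, and service names are illustrative only.)

- hosts: appservers
  sudo: yes
  tasks:
  - name: update the clock app package to the desired version
    yum: name=clockapp-1.0 state=present
    notify: restart clockapp

  handlers:
  - name: restart clockapp
    service: name=clockapp state=restarted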

For hosts, a restart is definitely a separate task from an update. You might install a new kernel version immediately upon release but not want to restart the server until a scheduled outage window, for instance.

Since it sounds like using docker manually would also require you to first specify that the old container is shut down and then that the new container is created, I think we probably want the same behaviour from the ansible docker module (although we could have another parameter that tells ansible to restart the container as well).

OTOH, I missed the fact that you had specified name in your playbook before. Tracing through the code, it looks like when that's specified the present code wants to destroy the old container and create a new one; it's just buggy because it relies on the image's symbolic name (which can be changed to reference a different image) instead of its id.

So now I'm not sure if we should just fix this bug and then implement a parameter that says "Do not restart the running container"...

-Toshio

Thanks for that explanation, Toshio. That helps things a lot. And thanks for tracing through the code too!

If this is just a bug, then a fix would be nice. What I was hoping for is for the docker module to detect that a new version is available. Then I could do a dry run and check whether any of my docker applications can be upgraded by looking at the ansible output.