Ansible should support its own official build with the Docker Hub
Registry.
This was originally posted as an issue on GitHub:
https://github.com/ansible/ansible/issues/8212
The summary is copied below:
Ansible is an excellent tool for orchestrating containerized
systems. Unfortunately, making sure that the host machine has
all of the required dependencies installed to run the range of
modules can be tricky. In particular, there are some modules
that rely on python modules that may not be available on the
host's distribution, and so running a virtualenv is required.
By supporting its own official build, ansible could sidestep
this issue by installing all of the required dependencies
independent of the host's distribution. Moreover, this could
allow hosts to upgrade Ansible more rapidly instead of waiting
for a new release on the host's distribution.
The end result would be to be able to run commands like:
docker run --rm -v /path/to/my/data:/data:ro ansible:1.7
ansible-playbook -i production site.yaml
This would run `ansible-playbook -i production site.yaml` with
ansible version 1.7 in a read-only copy of the directory at
`/path/to/my/data` in an environment that has all of the module
dependencies like docker-py, python-keyczar, pymongo, urllib,
etc.
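To avoid retyping the long `docker run` invocation, a small wrapper
could live on the host; the function and its name here are purely
illustrative:

```shell
# Illustrative wrapper: run a containerized ansible-playbook against
# the current directory, mounted read-only at /data inside the image.
ansible_playbook_docker() {
    docker run --rm -v "$(pwd):/data:ro" ansible:1.7 \
        ansible-playbook "$@"
}
```

Usage would then be `ansible_playbook_docker -i production site.yaml`
from the playbook directory.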
Another side effect of this would be to be able to lock-down
version numbers for all of the modules ansible depends on that
are outside of ansible's control, reducing the potential bug
footprint.
Included on the GitHub issue is an example file that builds ansible with
all of its module dependencies on CentOS 7.
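A hedged sketch of what such a build file might look like (the actual
file is attached to the issue; the package list and pinned versions
below are illustrative, not the real example):

```dockerfile
# Hypothetical sketch of a CentOS 7 ansible image; the real example
# file on the GitHub issue may differ.
FROM centos:7

# System packages needed to build the Python module dependencies
RUN yum install -y epel-release && \
    yum install -y gcc python-devel python-pip postgresql-devel && \
    yum clean all

# Ansible plus pinned module dependencies, locking down versions
RUN pip install ansible==1.7.2 docker-py==0.3.2 python-keyczar \
    pymongo psycopg2

# Playbook directories get bind-mounted here at run time
WORKDIR /data
```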
@mpdehaan commented that Ansible could provide an image, but that it
would have to exist for every OS combo for using ansible as a
provisioner. I don't think this is the case; all that matters is that
the host machine can run docker images. It doesn't matter to the user
what operating system the ansible image is based on. In the example, I
used CentOS 7 because it is a stable choice that is unlikely to cause
issues when upgrading to newer versions of ansible and its modules.
Can I ask what the problem was with “pip install ansible”?
I read it as please bless an image. Users these days. :\
ToBeReplaced, <rant>...snip...</rant> I feel you should visit
http://goo.gl/5mgtWr with your commercial request.
`pip install ansible` works for the base case, though it has a small
risk of failure if anything changes underneath you like pycrypto.
For a more typical case, you also need to install other modules, and
consequently, system libraries. For example, `yum install
postgresql-devel` and `pip install psycopg2`. Or worse, ansible
requires a version of a python module that isn't available on the
control machine's operating system (e.g. `docker-py`).
The domain for breakage gets large quickly. As an example, you could
look at how many GitHub issues revolve around out-of-sync versions of
Docker and docker-py.
The goal is to establish, in a platform independent way, that "this will
work". Since it's a fair amount of work to actually set up all of the
dependencies to have a host machine that can provision anywhere, and I
already did that work, I am trying to give that back to the
community in the hopes that the community would help keep it up to date.
I do not have a commercial request at this time. I'm happy to continue
using the posted image in the absence of any action from Ansible.
Without realizing it, you’re making a great argument for OS packages, which have worked fantastically for years.
They are awesome and are what distributions are for.
For Enterprise Linux variants (RHEL, CentOS, etc), configure EPEL, and just “yum install ansible” and it brings in all binary deps, working and happy. It’s in Fedora without configuration.
For Ubuntu, we offer our own PPA now, controlled by us and always containing the latest release.
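For reference, the two distribution routes above as commands (the EPEL
package and PPA names here are the well-known ones, but verify against
current docs):

```shell
# RHEL/CentOS: enable EPEL, then install ansible from it
sudo yum install -y epel-release
sudo yum install -y ansible

# Ubuntu: add the Ansible PPA, then install
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
```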
Golden images bring questions of security updates, extra bloat, and also require someone to adopt Docker when they may not want to.
Agreed entirely.
There are still some issues with the OS package approach though. For
example, through EPEL, you can get docker-py version 0.2.3. However,
ansible's docker modules require docker-py 0.3.x.
Consequently, if you are on RHEL, you cannot use ansible to manage
docker containers without violating security standards by installing pip
and upgrading docker-py. Moreover, it's not clear what upgrading the
system's version of docker-py would do in the context of the broader
system.
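The mismatch is easy to demonstrate with a version comparison; the
0.3.0 floor below is an assumption for illustration, not ansible's
actual pin:

```shell
# Compare EPEL's docker-py (0.2.3) against an assumed minimum (0.3.0)
# using GNU sort's version ordering.
installed="0.2.3"   # what EPEL ships
required="0.3.0"    # illustrative floor for the docker module
lowest=$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "docker-py $installed is new enough"
else
    echo "docker-py $installed is too old; need >= $required"
fi
# -> docker-py 0.2.3 is too old; need >= 0.3.0
```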
Maybe that's okay, and deciding that the bleeding-edge of ansible is
best left to those willing to take those risks is a reasonable approach.
I think that a docker image is one way of exposing the bleeding-edge to
the end user while eliminating those risks, but it may just not be worth
the effort involved.
The problem is that Docker is moving too fast; the fact that managing Docker requires a new version of a Docker module is a reason to use more Docker?
Docker is interesting for specific use cases, but I think this is going a bit far.
Most other libraries have very good versions in EPEL, and even if we produce an image, it’s likely to change again tomorrow.
Which is why images aren’t a good approach - you have to keep updating them.
I would argue there really aren’t any risks, you do need a recent version of a docker library, but it’s either going to work or it’s not.
Anyway, using docker to fix a docker problem seems a bit meta.