How do you structure playbooks to manage many applications across many environments?

Hey guys,

I hope this is the right place to ask for advice! If not, let me know and I’ll mosey along

I would recommend having different inventory files for different environments and structuring everything else within those files according to your needs.
This way a user always has to specify at least the environment on the CLI.
Any further limiting can be done via the --limit parameter, and you should structure your groups (and maybe groups of groups) so that your users can easily limit them as needed.
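As a sketch of that layout (host and group names here are purely illustrative), a per-environment inventory file might look like this, invoked with something like `ansible-playbook -i inventories/production site.yml --limit webservers`:

```ini
# inventories/production -- one inventory file per environment
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

# a group of groups, so --limit can address a whole tier at once
[frontend:children]
webservers
```

A matching `inventories/staging` file would define the same group names with its own hosts, so the same playbooks and --limit patterns work in every environment.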

What we do is this:

Playbooks

Each host must define a list variable called (e.g.) apps_list in its host_vars file. This will usually contain just one application, but could contain more than one if you install multiple apps on one server (e.g. different application DBs on a single DB server).
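Such a host_vars file might look like this (the host and application names are invented for illustration):

```yaml
# host_vars/db1.example.com.yml
apps_list:
  - billing_db
  - reporting_db
```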

There is one single playbook that acts as the entry point for everything; let’s call it “run.yml”. Hosts (or host groups) are passed in via extra_vars. The first “hosts” block iterates over each host, then loops over the application name(s) in the apps_list, and adds that host to a dynamic group of the same name.

All application playbooks are then listed and pulled in using the include directive. Each application playbook uses a hard-coded “hosts” group name that exactly matches the dynamic group names that (potentially) were created by the opening host block.

Each included application playbook only executes if there is a matching host group present, in scope, with any hosts in it.

Let me show you what I mean.

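A minimal sketch of that pattern, here using the group_by module to build the dynamic groups (all application, group, and variable names are illustrative assumptions):

```yaml
# run.yml -- the single entry point.
# Invoked as e.g.:  ansible-playbook -i production run.yml -e "target=dbservers"

- hosts: "{{ target | default('all') }}"
  gather_facts: false
  tasks:
    # Each host's host_vars defines apps_list; group_by adds the host to a
    # dynamic group named after each application it carries.
    - group_by: key={{ item }}
      with_items: "{{ apps_list }}"

# Application playbooks; each targets its own dynamic group and is simply
# skipped if no host ended up in that group.
- include: billing_db.yml
- include: reporting_db.yml
```

Each application playbook then hard-codes the matching group name:

```yaml
# billing_db.yml -- the "hosts" value matches the dynamic group name exactly
- hosts: billing_db
  roles:
    - billing_db
```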

My opinion is that there should be only one inventory. To achieve this we have an inventory with multiple axes, so that every host is a member of a few groups (typically one for the running app and one for the environment). This still requires “duplicated” playbooks, because the hosts must be explicit there, but it allows you to manage the “real” playbook in a separate file. With this approach, the playbooks look like

```yaml
- hosts: production:&database
  include: database.yml
```
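As an illustration of such a multi-axis inventory (host names invented), each host is listed once per axis it belongs to:

```ini
[production]
db1.example.com
web1.example.com

[staging]
db2.example.com

[database]
db1.example.com
db2.example.com

[webserver]
web1.example.com
```

The pattern production:&database is the intersection of the two groups, so here it would target only db1.example.com.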

John McNulty:

That is clever, I really dig that! We handle the Puppet deployments (which I’m currently replacing, one by one) in a very similar manner, so just porting that process over makes a lot of sense. Do you maintain all of that information in flat files on disk, or in a database somewhere? I would fear that as the infrastructure grew it would become difficult to manage – granted, that information has to live somewhere. I would need to support multiple instances/installs of an application, but I think that could be doable…

You’ve kind of blown my mind, I need to think all that over a bit.

As far as sticking my foot in the water, believe me, I want to take Puppet out of the picture ASAP – it’s the rest of the team that needs convincing. My plan is to carve away at Puppet’s responsibilities until I can cut it out entirely. Thanks for the tip about Collins, too! I had actually been checking it out yesterday as a replacement. The team’s inclination is to roll our own, but as we have a lot of work to do with limited resources, I’m trying to push us away from that.

Dammit man, now I have to re-architect all my Ansible code :stuck_out_tongue:

For now, it is only in the inventory itself. I use a directory for the inventory, so that each individual group has its own file. That means less management overhead than a single file, and makes it very easy to add new nodes just by dropping their names into the proper files (although removal is not so straightforward). For large deployments a database (I prefer LDAP) could be good permanent storage, but in our case we are not there yet, because the inventory is not too hard to construct from scratch using node names and some EC2 tags that we define.
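For instance, one file in such an inventory directory might look like this (group and host names invented); adding a node to the group is just appending a line:

```ini
# inventory/database -- one file per group inside the inventory directory
[database]
db1.example.com
db2.example.com
```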

Javier Palacios