configuration organization

I suspect this question comes up from time to time, but I’m struggling to wrap my brain around how we should organize our playbooks. We have a few different types of configuration:

  • role - run at regular time intervals - configuration that describes a node.
  • deployments - run on an as-needed basis.
  • adhoc - run on an as-needed basis - plays that are more complex than a simple ansible -m execution.

I get the role-based configuration model and associated configuration layout. It's the easiest to grok and is similar to how other config systems manage groups of nodes. I have set up a pull workflow that determines a node's role and executes its associated role playbook. We run this at random 10-minute intervals, but we can also force an execution via the push model.
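(For concreteness, the pull side is just cron firing ansible-pull every ten minutes with a random splay; roughly the following, where the repo URL and paths are placeholders:)

# /etc/cron.d/ansible-pull -- sketch only; URL and paths are made up
*/10 * * * * root ansible-pull -U https://git.example.com/ansible.git -d /var/lib/ansible/local -s 60 local.yml >> /var/log/ansible-pull.log 2>&1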

OK, so that's all well and good, but I don't know what the best practice is for the other two types of playbooks. One approach that seems somewhat reasonable is to have tasks defined outside of main.yml for specific roles (or in common, if that applies); a rough sketch of what I mean is below. I just feel like I'm missing something or thinking too hard about this.
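Something like this, where the role name, file name, and variable are just made up:

roles/webserver/tasks/main.yml      # regular convergence tasks, run on every pull
roles/webserver/tasks/deploy.yml    # deployment-only tasks

# at the bottom of main.yml, only pull the deploy tasks in when asked for,
# e.g. ansible-playbook site.yml -e run_deploy=true
- include: deploy.yml
  when: run_deploy | default(false) | bool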

-Dan

Deployment-based playbooks typically follow the pattern of having some steps execute on localhost, and then including the configuration playbook at the bottom, using the "add_host" trick.

Copying a bit from the AWS deployment guide on docs.ansible.com:

# Use the ec2 module to create a new host and then add
# it to a special "ec2hosts" group.

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    ec2_access_key: "--REMOVED--"
    ec2_secret_key: "--REMOVED--"
    keypair: "mykeyname"
    instance_type: "t1.micro"
    image: "ami-d03ea1e0"
    group: "mysecuritygroup"
    region: "us-west-2"
    zone: "us-west-2c"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           aws_access_key={{ ec2_access_key }}
           aws_secret_key={{ ec2_secret_key }}
           keypair={{ keypair }}
           instance_tags='{"foo":"bar"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info

    - debug: var=ec2_info
    - debug: var=item
      with_items: ec2_info.instance_ids

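    # add_host puts each new instance's public IP into an in-memory "ec2hosts"
    # group, so a later play in this same run can target it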
    - add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances

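    # block until SSH is reachable on each new instance before configuring it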
    - name: wait for instances to listen on port:22
      wait_for:
        state=started
        host={{ item.public_dns_name }}
        port=22
      with_items: ec2_info.instances

So that play provisions your nodes, but it also adds them to a dynamic group called "ec2hosts", so your playbook can have a second play:

- hosts: ec2hosts
  roles:
    - configure_me
    - roles_go_here

So the hosts that get provisioned can be configured at the end of the run, just by applying some roles.

As for your other playbooks, I'd still recommend using roles, because they provide better organization. For instance, you might have a role called "rotate_ssh_keys", or something like that.
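A playbook like that can stay tiny; a rough sketch, where the role and group names are just placeholders:

# rotate_ssh_keys.yml -- run on demand with: ansible-playbook rotate_ssh_keys.yml
- hosts: webservers
  roles:
    - rotate_ssh_keys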