I’m curious how you guys handle dynamic hosts, such as instances started up on AWS.
This is a similar issue to one I have with Puppet. Chef tends to work better, since there’s a way to designate roles using the startup data. But since Ansible uses a “static” hosts list or requires an external hosts script (similar to Puppet), how do you guys deal with a newly started instance?
I would hate having to manually update the hosts file every time a new instance is started up.
Some ways that I’m thinking about:
An external hosts script that grabs a list of all instances via the AWS API and uses security groups as roles/classes (rough sketch after this list).
Somehow have the startup data indicate the roles and have that data written to a file on the instance. However, how would Ansible go about retrieving this information in the first place?
Every instance that gets started publishes its role (via startup data or security group) and posts it somewhere else, such as an S3 bucket or some directory service.
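To make the first idea a bit more concrete, here is the rough shape I have in mind for such a script (completely untested, and the boto library, the region, and keying hosts on their public DNS names are all just assumptions on my part):

#!/usr/bin/env python
# Rough sketch of an external inventory script that groups running EC2
# instances by security group name. Assumes boto is installed and AWS
# credentials are available in the environment.
import json
import sys

import boto.ec2

REGION = "us-east-1"  # assumption; adjust as needed


def build_inventory():
    conn = boto.ec2.connect_to_region(REGION)
    inventory = {}
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            if instance.state != "running" or not instance.public_dns_name:
                continue
            # Treat each security group name as an Ansible group ("role").
            for group in instance.groups:
                inventory.setdefault(group.name, []).append(instance.public_dns_name)
    return inventory


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        # No per-host variables in this sketch.
        print(json.dumps({}))
    else:
        # Ansible calls inventory scripts with --list and expects
        # {"group": ["host", ...], ...} as JSON on stdout.
        print(json.dumps(build_inventory()))

The third idea could presumably reuse the same --list/--host contract, just reading from the S3 bucket or directory service instead of hitting the EC2 API directly.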
I’m assuming people will either do external hosts scripts or have something that generates their inventory.
The provisioning tool should probably write to something an external hosts script could read, unless you want to query AWS, which you could probably do as well.
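If the provisioning tool writes the data in the right shape to begin with, the inventory script can be almost trivial; something like this, where the file path and JSON layout are just made up for illustration:

#!/usr/bin/env python
# Minimal sketch: replay a JSON file the provisioning tool keeps up to date.
# The path and the {"group": ["host", ...]} layout are assumptions.
import json
import sys

GROUPS_FILE = "/etc/ansible/ec2_groups.json"  # hypothetical path

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        print(json.dumps({}))  # no per-host variables in this sketch
    else:
        with open(GROUPS_FILE) as f:
            print(f.read())  # already in the shape --list expects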
I’m planning on writing some scripts that query the “ec2-describe-tags” command-line tool, cross-reference it with “ec2-describe-instances”, and sort the output into a basic hosts file for Ansible. They probably won’t be extremely robust (mainly just tagging for role and environment), but I’ll try to post them here once they’re put together.
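Roughly the shape I’m aiming for, although the real version will shell out to the ec2-describe-* tools rather than use boto as this untested sketch does, and the “role”/“env” tag names are just my own convention:

#!/usr/bin/env python
# Untested sketch: dump running EC2 instances into a basic INI-style hosts
# file, grouped by a "role" tag and filtered by an "env" tag (both tag names
# are my own convention, not anything Ansible requires).
import boto.ec2

REGION = "us-east-1"        # assumption
ENVIRONMENT = "production"  # assumption


def main():
    conn = boto.ec2.connect_to_region(REGION)
    groups = {}
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            if instance.state != "running":
                continue
            if instance.tags.get("env") != ENVIRONMENT:
                continue
            role = instance.tags.get("role", "ungrouped")
            groups.setdefault(role, []).append(instance.public_dns_name)

    # Write a plain INI-style inventory Ansible can consume directly.
    with open("hosts", "w") as f:
        for role, hosts in sorted(groups.items()):
            f.write("[%s]\n" % role)
            for host in sorted(hosts):
                f.write("%s\n" % host)
            f.write("\n")


if __name__ == "__main__":
    main()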
As of yesterday, there is an external inventory script specifically for EC2 in the devel branch under examples/scripts. Please give it a try and post any feedback you have.
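If you make the script executable, you should be able to point -i straight at it and use the groups it reports like any other inventory, along the lines of (substitute the actual path to the script in your checkout):

ansible all -i /path/to/examples/scripts/<ec2 script> -m ping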
If your script does more than this, please feel free to add to it and post a PR.
I am working on the documentation for it as well, so that will be available shortly.