ec2.py dynamic inventory and host names

When running playbooks with a dynamic inventory and ec2.py, the host names show up as IP addresses. This makes monitoring what Ansible is doing, let alone debugging, difficult. Is there any way to change this behaviour?

Thanks.

I believe Peter’s favorite phrase here is “you should treat AWS servers like cattle, not pets”. You should be able to tell what Ansible is doing from the names of your roles, host groups, and task headers, so I’m a little puzzled why an IP would make debugging difficult.

That being said, I’d like to hear more.

Here’s the relevant info from the default ec2.ini:

# This is the normal destination variable to use. If you are running Ansible
# from outside EC2, then 'public_dns_name' makes the most sense. If you are
# running Ansible from within EC2, then perhaps you want to use the internal
# address, and should set this to 'private_dns_name'.
destination_variable = public_dns_name

# For servers inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2.
vpc_destination_variable = ip_address
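For what it's worth, the choice between those two settings hinges on whether the instance has a subnet_id. Here's a simplified sketch of that selection logic, not the script's verbatim code (attribute names follow boto's EC2 instance object):

def inventory_name(instance,
                   destination_variable="public_dns_name",
                   vpc_destination_variable="ip_address"):
    # Instances with a subnet_id live in a VPC, so ec2.py names them
    # by vpc_destination_variable; everything else (EC2-Classic)
    # falls back to destination_variable.
    if getattr(instance, "subnet_id", None):
        return getattr(instance, vpc_destination_variable)
    return getattr(instance, destination_variable)

So with the defaults above, a VPC instance shows up in the inventory as its public IP, and a classic instance as its public DNS name.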

> I'm a little puzzled why an IP would make debugging difficult

Because you have to go fishing to find out which box each IP address corresponds to. But you're right, it's a bigger issue for monitoring.

Monitoring just by which roles are applied isn't enough, because it doesn't tell me whether that's a prod/qa/dev box, or which environment it belongs to (we typically have several QA environments going at once).
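For context, ec2.py does also build groups from instance tags, so an env tag can at least distinguish environments by group name. A rough sketch of that tag-to-group mapping as I understand the stock script (the sanitisation details are an assumption worth checking against your copy):

import re

def to_safe(word):
    # Replace characters that aren't valid in group names with
    # underscores, mirroring (as far as I can tell) what ec2.py does.
    return re.sub(r"[^A-Za-z0-9_]", "_", word)

def tag_groups(tags):
    # Each tag on an instance becomes a tag_<key>_<value> group.
    return [to_safe("tag_%s_%s" % (k, v)) for k, v in tags.items()]

print(tag_groups({"env": "qa2", "Name": "web-1"}))
# -> ['tag_env_qa2', 'tag_Name_web_1']

So targeting hosts: tag_env_qa2 in a play would at least show which environment a run touches, even if the individual host lines are still IPs.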

I do have public_dns_name set up in ec2.ini as you show.

I'm thinking of going back to text inventories, and having a script (possibly using ec2.py) populate the IP addresses into them. Has anybody done something like that before?
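Something along these lines could be a starting point: read ec2.py's --list JSON and write a static INI inventory that aliases each host by its Name tag. The hosts.ini filename and the assumption that every instance carries a Name tag (exposed by ec2.py as ec2_tag_Name) are illustrative, not requirements:

#!/usr/bin/env python
# Rough sketch: turn ec2.py's JSON into a static INI inventory with
# readable aliases. Assumes ec2.py sits in the current directory.
import json
import subprocess

inventory = json.loads(subprocess.check_output(["./ec2.py", "--list"]))
hostvars = inventory.get("_meta", {}).get("hostvars", {})

with open("hosts.ini", "w") as f:
    for group, hosts in sorted(inventory.items()):
        if group == "_meta":
            continue
        # Groups may be plain lists of hosts or dicts with a "hosts" key.
        if isinstance(hosts, dict):
            hosts = hosts.get("hosts", [])
        f.write("[%s]\n" % group)
        for ip in hosts:
            name = hostvars.get(ip, {}).get("ec2_tag_Name", ip)
            # Alias the host by its Name tag, keeping the IP as the
            # actual connection address.
            f.write("%s ansible_ssh_host=%s\n" % (name, ip))
        f.write("\n")

Pointing ansible at hosts.ini would then show the Name tags in play output instead of raw IPs. The downside is that the file goes stale between runs, so you'd want to regenerate it before each play.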