I can see the benefits of using the ec2.py dynamic inventory script,
but I've run into some issues that I can't figure out how to fix.
1. ec2.py requires the AWS credentials to be available as environment
variables, but my deployment only has them in a vaulted vars file
(in host_vars/localhost, so a previous Ansible role that creates the
infrastructure can use them). How do people handle the storage of
credentials?
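One workaround I've considered is a wrapper script that turns the decrypted vars into environment variables before invoking Ansible. This is only a sketch: the variable names (aws_access_key, aws_secret_key) and file path are assumptions, and in practice the `creds` variable would come from something like `ansible-vault view host_vars/localhost/aws.yml` rather than the inline sample used here for illustration:

```shell
# Hypothetical helper: turn "key: value" lines (e.g. the output of
# `ansible-vault view` on the vaulted vars file) into the environment
# variables that ec2.py/boto looks for. Demonstrated on an inline sample.
creds='aws_access_key: AKIAEXAMPLE
aws_secret_key: secret123'
export AWS_ACCESS_KEY_ID=$(printf '%s\n' "$creds" | awk '/^aws_access_key:/ {print $2}')
export AWS_SECRET_ACCESS_KEY=$(printf '%s\n' "$creds" | awk '/^aws_secret_key:/ {print $2}')
echo "$AWS_ACCESS_KEY_ID"   # AKIAEXAMPLE
```

The downside is that the credentials briefly exist in plain text in the wrapper's environment, which may or may not be acceptable depending on your threat model.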
I've set things up so that the AWS instance names are the same as in my
old inventory (i.e. proxy1, proxy2, web1, web2), so I can successfully
ping instances by name.
2. I also tag the instances I create, let's say 'Type=proxy'.
Using the dynamic inventory I can target those instances with
'tag_Type_proxy'.
This then means I would have to use 'group_vars/tag_Type_proxy' to hold
any vars specific to this group of hosts.
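For reference, the group name comes from how ec2.py sanitizes tag keys and values. The snippet below is a simplified reimplementation of that mangling (not the script itself), just to show why a 'Type=proxy' tag surfaces as the group 'tag_Type_proxy' (note that tag case is preserved):

```python
import re

def to_safe(word):
    # ec2.py-style sanitization (simplified): replace any character
    # that isn't alphanumeric or an underscore with an underscore.
    return re.sub(r"[^A-Za-z0-9_]", "_", word)

def tag_group(key, value):
    # A tag Key=Value becomes the inventory group tag_<Key>_<Value>.
    return to_safe("tag_" + key + "=" + value)

print(tag_group("Type", "proxy"))  # tag_Type_proxy
```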
However, this deployment is going to be used for both AWS and non-AWS
infrastructure.
I could just symlink:
group_vars/proxy => group_vars/tag_Type_proxy
Is that something that people do?
Alternatively I could rewrite the Ansible code to expect
'tag_Type_proxy' instead of 'proxy', but that seems convoluted...