When I provision a new EC2 instance and then try to connect using its IP or DNS name, I get the error message above ("not a valid DSA private key file").
When I call the same configuration play using the DNS name in the inventory file, it works fine.
More details:
When I call a playbook using:
ansible-playbook -i provision provision.yml -e "dc=eu-ireland"
it creates an instance, waits for SSH to come up, but then fails on the config play with:
fatal: [ec2-.eu-west-1.compute.amazonaws.com] => {'msg': 'FAILED: not a valid DSA private key file', 'failed': True}
but when I add "ec2-.eu-west-1.compute.amazonaws.com" to the webservers inventory file and run:
ansible-playbook -i webservers webservers.yml
the configuration play works fine.
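For reference, the static inventory entry is nothing special; assuming the group inside the file is also called webservers (hostname elided as above), the webservers file just contains:

    [webservers]
    ec2-.eu-west-1.compute.amazonaws.com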
There's no difference where the system comes from in this case, whether it's added in-play or read from a static inventory, FWIW.
I strongly recommend not editing the inventory file via Ansible, and using the dynamic inventory source instead.
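The dynamic source here means the external EC2 inventory script that ships with Ansible (plugins/inventory/ec2.py); pointing -i at that script instead of a static file looks roughly like this, with the path being illustrative:

    ansible-playbook -i /path/to/plugins/inventory/ec2.py webservers.yml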
Sounds like you need to set the pem file via the right parameter (see --help), and you might need to be calling something via local_action, but I haven't seen the whole playbook so it's hard to say for sure.
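As a rough sketch only (I haven't seen the playbook, so the register variable, group name, AMI, and key path below are all placeholders), the provisioning play would typically register the new instance, add it to an in-memory group via add_host, and set the key as a host variable:

    - name: launch instance
      local_action: ec2 keypair=mykey instance_type=m1.small image=ami-xxxxxxxx wait=yes
      register: ec2

    - name: add new instance to a launched group, pointing it at the pem file
      local_action: add_host hostname={{ item.public_dns_name }} groupname=launched ansible_ssh_private_key_file=/path/to/key.pem
      with_items: ec2.instances

    - name: wait for SSH to come up
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

The configuration play would then target hosts: launched, and either the host variable above or --private-key on the command line should cover the key.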
Thanks for the input, Michael. I'm not sure where the issue is, but when I call both playbooks with --private-key=/path/to/key, the provisioning one fails and the configuration one works. The same is true when I set the pem file via ansible.cfg.
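For reference, the ansible.cfg equivalent of the --private-key flag is just the following (path illustrative):

    [defaults]
    private_key_file = /path/to/key.pem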
The part that confuses me is that the host is correct in the -vvvv output, i.e. I can see the SSH command going to the correct hostname, so I don't really understand why connecting via the in-play host group fails when pulling the same host from a static inventory file works.
I set up a blank-slate config management server and installed Ansible from scratch, and it exhibited the same behaviour.
This is the whole playbook: http://pastebin.com/9u4FUtXJ
common is empty, and webtier can contain whatever command you like (anything works there); right now it's just: