{'msg': 'FAILED: not a valid DSA private key file', 'failed': True} when configuring newly created EC2 instances

When I provision a new EC2 instance and then try to connect using the IP or DNS name, I get the error message above.

When I call the same configuration play using the DNS name in the inventory file it works fine.

More details:

When I call a playbook using:
ansible-playbook -i provision provision.yml -e "dc=eu-ireland"

it creates an instance, waits for SSH to come up, but then fails on the config play with:
fatal: [ec2-.eu-west-1.compute.amazonaws.com] => {'msg': 'FAILED: not a valid DSA private key file', 'failed': True}

but when I add "ec2-.eu-west-1.compute.amazonaws.com" to the webservers inventory file and run:
ansible-playbook -i webservers webservers.yml

it connects and runs fine.

provision.yml:
http://pastebin.com/9u4FUtXJ

webservers.yml:
http://pastebin.com/k9mCu587

ansible.cfg:
http://pastebin.com/NzF2Lc1Y
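
In case the pastebin links go stale, the provisioning flow being described is roughly: launch the instance with the ec2 module, wait for SSH, add the new host to an in-memory group with add_host, then run a second play against that group. The module arguments, group name, and remote user below are illustrative assumptions, not the actual contents of provision.yml:

  - hosts: localhost
    connection: local
    gather_facts: no
    tasks:
      - name: Launch an EC2 instance (parameters are placeholders)
        ec2: keypair=mykey instance_type=t1.micro image=ami-xxxxxxxx region=eu-west-1 wait=yes
        register: ec2

      - name: Wait for SSH to come up on the new instance
        wait_for: host="{{ item.public_dns_name }}" port=22 delay=60 timeout=320 state=started
        with_items: ec2.instances

      - name: Add the new instance to an in-memory host group
        add_host: hostname="{{ item.public_dns_name }}" groupname=launched
        with_items: ec2.instances

  - hosts: launched
    user: ec2-user
    roles:
      - webtier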

OS: CentOS 6.4 on both config and remote machines.

If there is anything else that would help, please let me know.

I changed the task where I was adding the EC2 instance to the host group to this instead:

  - name: Add hosts to webserver inventory file
    lineinfile: dest=/home/ec2-user/ansible/webservers regexp="{{ item.public_dns_name }}" line="{{ item.public_dns_name }}" state=present
    with_items: ec2.instances

And everything works fine. I think there might be a bug with running plays against a host group created dynamically from EC2 instances.

There's no difference based on where the host comes from in this case, whether it's in a dynamically created group or static inventory, FWIW.

I strongly recommend not editing the inventory file via Ansible; use the dynamic source instead.
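
If it helps, "the dynamic source" here usually means the stock EC2 inventory script (ec2.py plus its ec2.ini config) from the Ansible repository rather than a hand-maintained hosts file. A minimal sketch of that usage, with placeholder credentials and key path:

  # ec2.py reads ec2.ini and needs AWS credentials available in the environment.
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  chmod +x ec2.py
  ansible-playbook -i ec2.py webservers.yml --private-key=/path/to/key.pem

The hosts: line in webservers.yml would then target one of the groups the script generates, for example by tag, region, or security group.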

Sounds like you need to set the pem file via the right parameter (see --help), and you might need to be calling something via local_action, but I haven't seen the whole playbook so it's hard to say for sure.
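
For reference, the two usual ways of pointing Ansible at the .pem file are the command-line flag and the ansible.cfg setting; the paths here are placeholders:

  # On the command line:
  ansible-playbook -i provision provision.yml -e "dc=eu-ireland" --private-key=/path/to/key.pem

  # Or in ansible.cfg, under [defaults]:
  [defaults]
  private_key_file = /path/to/key.pem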

Thanks for the input, Michael. I'm not sure where the issue is, but when I call both playbooks with --private-key=/path/to/key, the provisioning one fails and the configure one works. The same is true when I set the pem file via ansible.cfg.

The part that confuses me is that the host is correct in the -vvvv output, i.e. I can see the SSH command going to the correct hostname, so I don't understand why the dynamically created host group fails while the same host pulled from an inventory file works.
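
For anyone following along, the verbose output referred to here comes from adding -vvvv to the same invocation (the key path is a placeholder):

  ansible-playbook -i provision provision.yml -e "dc=eu-ireland" --private-key=/path/to/key.pem -vvvv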

I set up a blank-slate config management server, installed ansible from scratch and it exhibited the same behaviour.

This is the whole playbook: http://pastebin.com/9u4FUtXJ
common is empty, and webtier works with any command you care to put in it; right now it's just:

nano roles/webtier/tasks/main.yml
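
In case it helps reproduce this, a trivial main.yml for that role could be nothing more than the following (an illustrative placeholder, not the actual file contents):

  # roles/webtier/tasks/main.yml
  - name: Run a trivial command to confirm the connection works
    command: /bin/echo hello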