Can't run playbook against multiple nodes using ec2.py and AWS tags

Hello,

I’m using the Amazon EC2 dynamic inventory script (ec2.py) from here. I’ve created some instances on EC2 and given all of them a custom tag called Purpose with a value like NodePurpose, and instance names Node1, Node2, etc.

I’ve written a playbook with:

Well…
do your nodes have the same ssh key?
is ubuntu your remote_user?
is ~ubuntu/.ansible writable/readable?

Better yet, can you show us your playbook and the task it is failing on?
Best,

The nodes have the same ssh key, and the user is correct. In fact, when I try the playbook with --limit and apply it to one node at a time, it works correctly. It’s only when I group nodes using a custom EC2 tag that this happens.

Also, this has happened with several playbooks, and always during the “setup” task, before any of my own tasks. To demonstrate this, I wrote this ultra-simple playbook:


- hosts: tag_Purpose_NodePurpose
  gather_facts: True
  become: yes
  tasks:
    - debug: msg="This is just a demonstration"
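
For reference, I invoke it against the dynamic inventory roughly like this (demo.yml is just a placeholder name for the playbook above):

# point ansible-playbook at the dynamic inventory script instead of a static hosts file
ansible-playbook -i ec2.py demo.yml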

It still happens. I got the following output (some stuff redacted):

http://pastebin.com/m2Lu8yQC

I also ran the same playbook ONLY on the node that failed above, and it worked correctly.

I believe I figured out the problem. I had set my control_path to this value in ansible.cfg:

[ssh_connection]
control_path = %(directory)s/%%r

This didn’t include %h (i.e. the hostname), so connections to different hosts overwrote each other’s control sockets (by default, Ansible re-uses SSH connections via ControlMaster multiplexing, as described here).
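
To illustrate what I believe was happening (assuming the default ~/.ansible/cp control path directory and made-up node addresses):

# control_path = %(directory)s/%%r gives one socket per remote user,
# so every host reached as "ubuntu" shares the same socket file:
#   Node1 (ubuntu@node1)  ->  ~/.ansible/cp/ubuntu
#   Node2 (ubuntu@node2)  ->  ~/.ansible/cp/ubuntu   <- same file; parallel connections clobber it
# With %%h (and %%p) in the path, each host gets its own socket instead.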

I tried both adding %h to the control path and disabling multiplexing (ControlMaster=no), and both fixed the problem.
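
In case it helps anyone else, this is roughly what my [ssh_connection] section looks like now; the control_path mirrors what I believe is Ansible’s stock default, and the commented-out line shows the multiplexing-off alternative (note that setting ssh_args replaces Ansible’s default SSH options):

[ssh_connection]
# one socket per host/port/user, so parallel connections can't share a socket
control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r
# alternative: disable multiplexing entirely (this overrides Ansible's default ssh_args)
# ssh_args = -o ControlMaster=no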

So, kids: include at least %h, %p, and %r in the control path, as the ssh_config man page suggests, so that connections don’t get mixed up.