I am having a weird issue when launching EC2 instances. I have a playbook that launches an instance into a private VPC subnet, using item.private_ip etc.
My SSH config is set up to proxy connections to any host in that subnet through a bastion. I can SSH into any host there using the bastion and my SSH config, so I know that part is working correctly.
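For reference, the proxying setup looks roughly like this (the hostname, user, and subnet here are placeholders, not my real values):

```
# ~/.ssh/config (illustrative -- hostnames, user, and CIDR are made up)
Host bastion.example.com
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem

# Proxy everything in the private subnet through the bastion
Host 10.0.1.*
    User ec2-user
    ProxyCommand ssh -W %h:%p bastion.example.com
```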
When I launch an EC2 instance, my wait_for local action always times out with “msg: Timeout when waiting for ”. However, after the playbook fails I can SSH into the host, and adding the host to my inventory file also works.
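The failing task is essentially this (a sketch -- the register variable and loop are assumptions about my setup, not exact copies):

```yaml
# Roughly the task that times out (variable names assumed)
- name: Wait for SSH to come up
  local_action: wait_for host={{ item.private_ip }} port=22 delay=10 timeout=320 state=started
  with_items: ec2.instances
```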
That makes sense, since wait_for does not have direct access to the host (it just opens a socket); you may need to delegate the wait_for task to the bastion machine.
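For example, something along these lines (an untested sketch -- the bastion hostname and loop variable are placeholders):

```yaml
# Run the port check from the bastion instead of the control machine
- name: Wait for SSH on the new instance
  wait_for: host={{ item.private_ip }} port=22 timeout=320
  delegate_to: bastion.example.com
  with_items: ec2.instances
```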
I have tried delegating the wait_for, and with debug enabled I can see that the task is being run on the bastion host, but I still hit the same timeout.
While the playbook is stuck on the wait_for task, I can SSH from the bastion to the new instance, so I know it’s not a timing issue; and as soon as the task fails, I can SSH directly from my Ansible host.
I can work around the issue by removing the wait_for and using a pause task instead, but I’d like to know what I’m doing wrong.
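The workaround is just a fixed delay rather than polling the port (the duration is arbitrary):

```yaml
# Workaround: blind pause instead of checking that port 22 is reachable
- name: Give the instance time to boot
  pause: seconds=120
```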