Double provisioning when running Ansible locally

Hi,

I’m using Ansible to deploy to a single machine. I run Ansible locally on the target machine, and the inventory comes from a script that returns JSON. Everything is driven from Vagrant, but the actual command line run on the target machine is:

$ ansible-playbook -i provisioning/inventory provision.yml --connection=local --limit=staging

The playbook works, but every task is run twice on the same machine: once against the hostname and once against the IP, as this sample task shows:

==> staging: TASK: [front-web | start apache2 service] *************************************
==> staging: ok: [staging.example.org]
==> staging: ok: [62.xx.xx.xx]

And in the end:

==> staging: PLAY RECAP ********************************************************************
==> staging: 62.xx.xx.xx : ok=278 changed=1 unreachable=0 failed=0
==> staging: staging.example.org : ok=278 changed=1 unreachable=0 failed=0

The issue does not happen when I run Ansible from a control machine: only “staging.example.org” gets provisioned, which is the desired behavior.

Here’s a stripped-down version of the JSON produced by the script:

{
  "local": {
    "hosts": [
      "example.local"
    ],
    "vars": {
      "hostname": "example.local",
      "ansible_ssh_host": "192.168.42.42"
    }
  },
  "staging": {
    "hosts": [
      "staging.example.org"
    ],
    "vars": {
      "hostname": "staging.example.org",
      "ansible_ssh_host": "62.xx.xx.xx",
      "ansible_ssh_user": "ubuntu",
      "ansible_ssh_private_key_file": "key/staging.example.org.pem"
    }
  }
}
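
For reference, a minimal dynamic inventory script producing JSON in this shape could look roughly like this (a simplified sketch, not my actual script; it only answers Ansible's standard --list and --host calls):

#!/usr/bin/env python
# Simplified sketch of a dynamic inventory script (not the real one).
# Ansible calls it with --list for all groups and with --host <name>
# for per-host vars (empty here, since all vars live on the groups).
import json
import sys

INVENTORY = {
    "local": {
        "hosts": ["example.local"],
        "vars": {
            "hostname": "example.local",
            "ansible_ssh_host": "192.168.42.42"
        }
    },
    "staging": {
        "hosts": ["staging.example.org"],
        "vars": {
            "hostname": "staging.example.org",
            "ansible_ssh_host": "62.xx.xx.xx",
            "ansible_ssh_user": "ubuntu",
            "ansible_ssh_private_key_file": "key/staging.example.org.pem"
        }
    }
}

if "--list" in sys.argv:
    print(json.dumps(INVENTORY))
elif "--host" in sys.argv:
    print(json.dumps({}))  # no per-host vars beyond the group vars
else:
    sys.stderr.write("Usage: inventory --list | --host <hostname>\n")
    sys.exit(1)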

Any idea what’s causing the “double-provisioning”?

Thank you,
Warren.

I found the issue.

For some reason, ansible_ssh_host triggers this unexpected behavior when running locally. Using the IP addresses as the inventory hosts instead worked fine for me. I rewrote my script to produce the following JSON:

{
  "local": {
    "hosts": [
      "192.168.42.42"
    ],
    "vars": {
      "hostname": "example.local"
    }
  },
  "staging": {
    "hosts": [
      "62.xx.xx.xx"
    ],
    "vars": {
      "hostname": "staging.example.org",
      "ansible_ssh_user": "ubuntu",
      "ansible_ssh_private_key_file": "key/staging.example.org.pem"
    }
  }
}
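
After regenerating the inventory, a quick sanity check (assuming the script follows the usual --list convention, as in the sketch above) is to run it directly and confirm that only the IPs appear as hosts:

$ provisioning/inventory --list | python -m json.tool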