I don’t know whether this is deliberate, but I cannot get the synchronize module to route to my hosts, even though all other modules work as expected.
All of the controlled hosts are behind firewalls that I do not control, so access is via a reverse ssh tunnel. The inventory file has entries like this:
host_1234 ansible_host=localhost:1234 ansible_user=username host_key_checking=false
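(As an aside, I am not sure the combined localhost:1234 form is officially supported in ansible_host; the conventional split form, using the standard ansible_port variable, would be:

```
host_1234 ansible_host=localhost ansible_port=1234 ansible_user=username
```

I mention it in case the embedded port is what confuses synchronize.)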
ansible_ssh_common_args is set in group_vars/all as:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -q jumphost"'
where jumphost is the machine that the controlled computers log into to set up their reverse ssh tunnels.
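For context, the reverse tunnel on each controlled host is set up roughly like this (the port numbers and jumphost name match my example host above; the exact invocation on my hosts may differ):

```
# Run on the controlled host, behind its firewall: expose the host's own
# sshd (port 22) as port 1234 on the jumphost, so the controller can
# reach it via "ssh -p 1234 localhost" from the jumphost.
ssh -N -R 1234:localhost:22 jumphost
```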
Commands such as:

ansible -i hosts host_1234 -m ping

work as expected:

host_1234 | SUCCESS => { "changed": false, "ping": "pong" }

However, the embedded rsync command in the synchronize module cannot reach the target host, because the module constructs the wrong rsync command. Running:
ansible -vvv -i hostfile host_1234 -m synchronize -a "src=/var/log/daemon.log dest=daemon"
creates:
/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format=<<CHANGED>>%i %n%L /var/log/daemon.log [username@localhost:1234]:daemon
A working rsync command is:
rsync -avz -e "ssh -A jumphost ssh -p 1234" username@localhost:/var/log/daemon.log daemon/
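For what it’s worth, the workaround I am considering is the synchronize module’s use_ssh_args parameter, which (assuming it behaves as documented) tells the module to apply ansible_ssh_common_args, including the ProxyCommand, to its rsync transport:

```yaml
# Sketch of a playbook task; use_ssh_args asks synchronize to reuse
# ansible_ssh_common_args for the ssh that rsync invokes.
- name: push daemon log through the jumphost
  synchronize:
    src: /var/log/daemon.log
    dest: daemon
    use_ssh_args: yes
```

But even if that works, the ad-hoc invocation above still seems broken.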
Is this a bug or a missing feature? Is the problem in Ansible’s construction of the connection parameters, or in how synchronize uses them?
Tim