127.0.0.1 and localhost in synchronize module

Why are localhost and 127.0.0.1 a special case in the synchronize module?

Given the nature of rsync and what it does, your question can be read a few ways, but let me guess at what you are getting at.

127.0.0.1 and localhost are treated specially because you're already on that box, so you don't need to provide any SSH credentials. You can do a sync that doesn't go out to the network at all.
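For illustration, a minimal sketch of what that looks like (the files/app/ and /srv/app/ paths are made up): when the play targets localhost, synchronize just runs rsync as a local copy.

    # Sketch only, illustrative paths. Because the play targets localhost with
    # a local connection, synchronize invokes rsync as a plain local copy;
    # no SSH connection or credentials are involved.
    - hosts: localhost
      connection: local
      tasks:
        - name: sync a directory without touching the network
          synchronize:
            src: files/app/
            dest: /srv/app/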


Sorry for bringing up a (somewhat) old topic, but what about the use case where Ansible is used to configure a Vagrant VM? In that case the VM is accessible through 127.0.0.1:2222, and thus gets hit by the special handling in the synchronize module. This means that if you want to write to a path where the unprivileged 'vagrant' user doesn't have write permissions, you have to add the non-obvious rsync_path="sudo rsync" to your task configuration, instead of just adding 'sudo: True' like everywhere else to get the same result.

I don't know if there's a good way to detect situations like this so that 'sudo: True' can have the intended effect. If not, I think it would be worth mentioning explicitly in the documentation for the module.
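For reference, here is a sketch of the workaround described above (the site/ and /var/www/site/ paths are just examples):

    # Workaround sketch: the VM is reached at 127.0.0.1:2222, so 'sudo: True'
    # on the task does not elevate rsync on the remote side; rsync_path has
    # to ask for sudo explicitly.
    - name: copy site files into a directory the vagrant user cannot write to
      synchronize:
        src: site/
        dest: /var/www/site/
        rsync_path: "sudo rsync"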

Ansible views Vagrant as no different from any other machine.

I believe ansible_ssh_port should be set in the inventory for that host.
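For example, a hypothetical inventory entry along those lines (host alias and user are made up; 2222 is Vagrant's default forwarded SSH port):

    # Hypothetical INI inventory entry for a Vagrant VM reached over the
    # forwarded SSH port on the loopback address.
    vagrant_vm ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant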

If you can paste what kind of errors you are seeing and your Ansible version, it might be clearer what you're talking about, but I don't understand what "get hit by the special handling" means in this particular case exactly.

More information would be useful.

Here’s what I do to gain some other benefits:

  • I let Vagrant dynamically generate the hosts file that is later used by Ansible.
  • The Vagrant boxes use a different subnet, thus not conflicting with the corner case described above (see the sketch below).
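A hypothetical entry from such a generated hosts file, assuming a host-only network address like 192.168.33.10:

    # Hypothetical generated inventory entry: the box lives on a private
    # subnet rather than 127.0.0.1, so synchronize's localhost special case
    # never comes into play.
    [vagrant]
    web ansible_ssh_host=192.168.33.10 ansible_ssh_user=vagrant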

Dan, this is far from a corner case. Vagrant's default behavior is to set up the SSH address as 127.0.0.1:2222.

Michael, thanks for your hard work on Ansible! It's not that Ansible treats Vagrant boxes differently; synchronize treats localhost and 127.0.0.1 differently, and maybe it shouldn't unless ansible_connection=local is set. I posted extended logs along with my inventory file and task in https://github.com/ansible/ansible/issues/5240#issuecomment-72944174