synchronize module connects to local machine as root before calling rsync

I am trying to use the synchronize module and I noticed that it first
connects to the local machine as root before calling rsync. However, the
root user's ssh key is not in the authorized keys of the remote host's
root user, so it prompts for a password, which causes ansible-playbook to
hang. I don't want to rsync from the local machine as the root user; I
want to rsync as the user who invoked ansible-playbook, which I think
should be the expected behavior. Am I missing something here?

Ansible version: 1.6.1

Well, I worked around it by setting ansible_connection to 'local' for
localhost. However, I think it should also work with 'ssh' by connecting
as the current user (not as 'root').
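
Concretely, the workaround is a one-line entry in the inventory (shown here for a plain ini-style hosts file):

localhost ansible_connection=local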

There’s a degree of fuzzy logic in the synchronize module to detect localness, but it’s not perfect; it was actually pretty surprising how complex “guessing” the right parameters for the sync turned out to be.

To understand this further, though, could we please see the lines from your playbook that you are using? That way we can make sure we’re talking about the same thing.

Thanks!

The playbook looks like this:

- hosts: "{{ target }}"
  user: root
  roles:
    - somerole

and the synchronize task in somerole looks like this:

- name: Synchronize some dir
  synchronize: src=relative/path/to/some/dir dest=/full/path/to/some/dir delete=yes recursive=yes

{{ target }} is always a remote machine. If ‘ansible_connection’ is NOT set to ‘local’ for localhost (actually, if it is not set at all), the synchronize module connects to the local machine through ssh as ‘root’.

Is this because of the “user: root” in the playbook? If so, it is wrong, because “user: root” is actually a synonym for “remote_user: root”, and synchronize should not pick up the “remote_user” when connecting on the “local” machine.

Besides the above, another issue is whether it is good default behavior to connect to localhost through ssh if ‘ansible_connection’ is undefined. Maybe ansible should use the ‘local’ connection when connecting to localhost unless either the ‘ansible_connection’ configuration option or the ‘connection’ playbook parameter is set to a different value.
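
For what it’s worth, “user” at the play level is an older spelling of “remote_user” (the two are aliases), so the play above is equivalent to:

- hosts: "{{ target }}"
  remote_user: root
  roles:
    - somerole

which is why I would expect it to affect only the remote end of the connection.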

“Besides the above, another issue is whether it is good default behavior to connect to localhost through ssh if ‘ansible_connection’ is undefined”

It depends on whether localhost is in the inventory or not, but there is nothing to suggest that an explicit localhost entry has requested a particular way to connect.

But it is indeed presumed to connect through ssh, probably because that is the default for all hosts with an undefined ansible_connection/connection. So I am thinking that perhaps it would be better default behavior to make a sensible exception for localhost and choose the ‘local’ connection (when ‘ansible_connection’ is undefined and ‘connection’ is not set in the playbook).

I just ran into this same issue. As alluded to above, adding this line to my inventory resolved it.

localhost ansible_connection=local

In my case, I was passing an explicit user and private key on the command line, and neither was set up for ssh access to my own machine. Should a note be added to this module's documentation about the situation of trying to ssh to the local machine? I like how this module tries to hide the complexity of what is really a local command delegated to localhost, but that made this a difficult thing to troubleshoot.
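
For context, the invocation was along these lines (the user name and key path are made up for illustration; the flags are the standard ansible-playbook options):

ansible-playbook -i hosts site.yml -u deploy --private-key=/path/to/deploy_key

Since those credentials were not valid for ssh-ing into my own machine, the delegated local step failed in the way described above until I added the inventory override.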

No, we don’t add documentation in this project when the software can be made to do the right thing. If localhost needs to know to use the local connection here, that can be handled within the fuzzy logic already in the synchronize plugin.

Ok, I did track the issue down to the delegate code in the core runner. Forcing the transport to local there resolved it. It appears all of the delegate logic relies on hostvars to determine which transport to use. I spent some time digging into the action plugin and couldn’t figure out whether hostvars can be overridden there, or whether doing so would have other side effects. This could also be an issue in the core runner itself, since I believe delegating to localhost should force a local transport.

One thing I forgot to note: the synchronize plugin worked fine with Vagrant. In that case, the connection was properly set to local when the synchronize plugin ran. Since Vagrant runs on localhost, maybe the fuzzy logic worked correctly there. I’ll see if I can dig in some more, but the override in the inventory did the trick in my case, presumably because the transport in hostvars ends up set correctly.

For local actions, as well as for the local part of a synchronize task, Ansible should force the connection to local only if it has not been set explicitly by the user (via the ‘ansible_connection’ hostvar). However, this requires that the implementation can distinguish whether an option’s current value was set explicitly by the user or filled in by a default policy. I am not sure that distinction is currently possible.
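
To make the proposal concrete, here is a minimal sketch of the decision logic in plain Python. This is illustrative only, not code from the Ansible tree, and the names are made up:

# Hypothetical sketch of the proposed transport selection; not actual Ansible code.
LOCALHOST_NAMES = {"localhost", "127.0.0.1", "::1"}

def pick_transport(host, hostvars):
    # An explicit user setting always wins.
    explicit = hostvars.get("ansible_connection")
    if explicit is not None:
        return explicit
    # Only when nothing was set, fall back to 'local' for localhost...
    if host in LOCALHOST_NAMES:
        return "local"
    # ...and to the usual ssh default for everything else.
    return "ssh"

The catch is exactly the distinction described above: if the hostvars the runner sees already contain a defaulted value, the ‘is not None’ check cannot tell an explicit setting apart from a default.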

We’re using a Turing-complete programming language.

It’s possible. Just needs to be done.