Ansible Sync Remote Server A to Remote Server B

Need some advice regarding remote to remote syncing

I need to be able to run an Ansible playbook from my local machine and have it sync a folder from Remote Server A to Remote Server B.

I found several solutions online but all of them failed on the same issue.

From what I can see from the error, it looks like it's using my local machine's username instead of the Ansible credentials I have in my Ansible hosts file, which it uses for all other playbooks.

I need to run this from my local machine as it will be part of provisioning new servers: one really big job combining 4 separate roles. The other 3 work perfectly.

Some examples I've tried:
Example 1

- name: Sync Pull task - Executed on the Destination host "{{Migration-A}}"
  hosts: Migration-A
  user: ansible
  tasks:
    - name: Copy the file from Migration-B to Migration-A using Method Pull
      tags: sync-pull
      synchronize:
        src: /home/
        dest: /home/
        mode: pull
      delegate_to: Migration-B
      register: syncfile
      run_once: true
Example 2

---
- name: Copy Directories from Server A to Server B
  hosts: Migration-A  # Replace with the actual hostname or group of Server A
  become: yes     # You may need to escalate privileges to read the source directories

  tasks:
    - name: Copy /home/ from Server A to Server B
      synchronize:
        src: /home/
        dest: /home/
      delegate_to: Migration-B  # Replace with the actual hostname of Server B
      become: yes  # You may need to escalate privileges to write to the destination directory

Assuming the target (inventory_hostname) is A, the copy target is B and the controller is C.

Most solutions you'll have found won't work with become: yes. You can only use become on the target server, not the 'other server', so things like rsync+ssh://user@B:/home/ (where user is not root) won't work, even though that is probably the most common suggestion for syncing from A to B while the controller is C.

Delegation just changes the 'target', which means that instead of doing a sync from C to A your play does C to B. There is no existing solution with the synchronize action that will handle a 3-way copy plus become. At best you can sync A to C and then C to B within a playbook, as sketched below.
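A minimal sketch of that two-hop approach, assuming the controller has room for a staging copy (the /tmp/home-staging/ path is made up) and reusing the Migration-A / Migration-B names from the examples above:

---
- name: Stage /home from A on the controller
  hosts: Migration-A
  tasks:
    - name: Pull /home/ from A down to a staging dir on the controller
      synchronize:
        mode: pull
        src: /home/
        dest: /tmp/home-staging/    # staging path on the controller (assumption)

- name: Push the staged copy from the controller to B
  hosts: Migration-B
  become: yes                       # likely needed to write /home/ on B as root
  tasks:
    - name: Push the staging dir to /home/ on B
      synchronize:
        mode: push
        src: /tmp/home-staging/
        dest: /home/

The trade-off is that the data touches the controller's disk in between, and file ownership only survives if rsync runs with enough privileges on each hop, which is exactly the caveat discussed here.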

Other alternatives:

  • Install and run an rsync server (running as root) on B and use synchronize from A with dest: rsync://user@B:/home (see the sketch after this list)
  • Set up /home as a network share (if NFS, no root squash) and use synchronize with 'local paths'
  • Enable root login from A to B and then use dest: rsync+ssh://B:/home
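For the first option, here is a rough sketch that sidesteps synchronize and just runs rsync on A against the daemon on B via the command module; the daemon module name home, the user syncuser, and the hostname for B are all assumptions, and the daemon itself (running as root, as noted above) has to be configured on B separately:

- name: Push /home/ from A straight to the rsync daemon on B
  hosts: Migration-A
  tasks:
    - name: Run rsync on A against the daemon on B (no controller staging)
      command: rsync -a /home/ rsync://syncuser@migration-b.example.com/home/

Because the daemon side runs as root, ownership and permissions can be preserved without any become gymnastics on the Ansible side.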

I don't know why this task has to be done this way, but maybe using a share on the servers could be a solution?

@bcoca provided some good solutions already. I would not see Ansible as a sync service, though. Moving files around, sure, but not as a sync service.

robocopy or rsync would be a better solution, or working with shares. Using Ansible to trigger robocopy on the remote etc. is another thing I can get behind though :slight_smile:


Thank you @bcoca & @it-pappa

I would usually just use rsync manually to sync the servers.

I'm in the process of planning and testing the migration of several production servers from CentOS 7 to AlmaLinux 9.

I have the playbooks written and tested for the base setup of the servers: creating the users, adding SSH keys, building the LVM home pool, installing nginx, php-fpm, Composer, Symfony, etc.

So the last part is to sync /home/ and a couple of config files from the old prod servers to the new ones.

I had read that Ansible had a sync feature, so I wondered if I could finish the whole task with just Ansible. That way I could easily reuse it for each server.

But if the A/B/C setup won't work, then I will just set up rsync manually to copy B to A from C like I used to do.

Doesn’t the synchronize module use rsync?

It does, but it effectively runs rsync from the command line as the login user, which creates a temporary rsync server on the 'target'. For 'root' to work, you either log in as root to the target (normally disabled) or you run rsync as a service there (as root).

You normally use rsync + root to preserve ownership and permissions on the copied files when they are not the same user/group as the user doing the copying.

Ansible can put a lot of scaffolding around actions, but it does not remove the need for root access in some cases. become handles normal escalation on a target, but it won't work here because there are 2 targets for one action, and become is not designed for that … nor is rsync.

Have you tried scp -3? It routes the traffic through C but does not write it to C's disk, so for fresh copies it is probably faster than rsync.
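For reference, a hedged sketch of driving that from the playbook: run scp -3 from the controller as a local task. The ansible login user, the host names, and the example someuser home directory are all placeholders, and the controller needs SSH access to both servers:

- name: Relay-copy a home directory from A to B through the controller
  hosts: localhost
  gather_facts: false
  tasks:
    - name: scp -3 streams the data through C without writing it to C's disk
      command: scp -3 -r ansible@Migration-A:/home/someuser ansible@Migration-B:/home/

Keep in mind that scp does not preserve ownership (files end up owned by the login user on B), so for a /home migration it probably only suits a first bulk copy.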


Also, as I already pointed out above, Ansible CAN copy from A to C to B; it just needs 2 synchronize tasks.

Hi @KMRSolutions! It looks like the post might be solved - could you check to see if the response by @bcoca worked for you?

If so, it would be super helpful if you could click the :heavy_check_mark: on their post to accept the solution - it helps users find solutions (solved topics have a higher search priority), recognises the input of the people who help you, helps our volunteers find new issues to answer, and keeps the forum nice and tidy. It’s just a nice way to give back, and only takes a moment :slight_smile:

Thanks!
(this is a template reply, but do feel free to reply if I've misunderstood the situation!)

I wouldn't sync it by using Ansible; I would use git.
If a play is ready to run, I push it to a central git server. Any server(s) on which you want to use the same play can do a git clone and execute the play with ansible.
The additional benefit is that as soon as you make any update to the play, you don't need to bother pushing it to all your Ansible instances.
