SQL migration playbook questions

Only loop if you want to generate more than one key pair. I think your error might be because you have something like this:

    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
        delegate_to: localhost
        run_once: true

When you should have:

    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      delegate_to: localhost  # 👈 note the leading white space
      run_once: true

ansible-lint is handy for catching some errors like this.

3 Likes

Yeah, got it - I had some spacing issues. This is something ChatGPT can do quite well, but you have to tell it not to change anything else or it will do scary stuff!
(image attachment: grafik)

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - 192.168.19.201
    - 192.168.151.237
    # - localhost
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237'

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.19.201'

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /tmp/id_rsa_remcpyusr.pub
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /tmp/id_rsa_remcpyusr
        mode: '0600'
        force: yes

Now I have to put everything together and create two test SQL servers and a test workflow - thank you again for your extremely valuable help!

1 Like

I now have the full playbook and have edited the permission-settings part so it sets just the needed permissions and copies the keys directly to the user's .ssh folder.

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - 192.168.19.201
    - 192.168.151.237
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237'

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.19.201'

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        mode: '0600'
        force: yes

- name: Set permissions for private key
  hosts: sync_src
  tasks:
    - name: Set permissions for private key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'

- name: Set permissions for public key
  hosts: sync_dst
  tasks:
    - name: Set permissions for public key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        group: remcpyusr
        mode: '0644'

- name: Synchronize files to target host
  hosts: sync_dst
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.synchronize:
        src: '{{ item }}'
        dest: remcpyusr@{{ ansible_default_ipv4.address }}:{{ item }}
        rsync_opts:
          - "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa'"
      delegate_to: '{{ groups["sync_src"] | random }}'
      when: "'sync_dst' in group_names"
      loop:
        - /tmp/test.txt


It seems to have some issue with the key auth, because the sync task is already taking its time…
@utoddl should the private key not be on the source host?
I tried to change it, but the copy still seems to get stuck in a loop.
It's a little nerve-wracking - I can copy just fine when I add the keys to ~/.ssh/ as root.
I can copy with:

 rsync -avz -e "ssh -i /home/remcpyusr/.ssh/id_rsa" remcpyusr@192.168.19.201:/tmp/test2.txt /tmp/test2.txt

Somehow it doesn't pick up the path to the private key from the options correctly.

I ruined it myself because I didn't know that the item variable is just filled with the values from loop - I removed the variable I had set and it's still looping around. I've now edited the playbook above to the actual version. To be able to continue, I'm using rsync directly for now:

- name: Synchronize files to target host
  hosts: sync_src
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.command:
        cmd: rsync -avz -e "ssh -i /home/remcpyusr/.ssh/id_rsa" remcpyusr@192.168.19.201:/tmp/test2.txt /tmp/test2.txt

Sorry, @gothsome , I’m not sure what that image is supposed to be showing me.

Why do you have separate steps to set the owner and group of the public and private keys instead of setting them when you copy them? Both owner: and group: are options for both ansible.builtin.file and ansible.builtin.copy.
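That is, something like this for the private key in a single task, with the ownership set at copy time (same idea for the public key):

    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'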

Are you changing from using ansible.builtin.synchronize to ansible.builtin.command? What’s wrong with the “Synchronize files to target host” task from #44? What’s the job log for that task?

BTW, I’ve never seen this pattern where you start another play with different hosts for single tasks rather than using a when: condition to limit the task to the hosts of interest. I keep wanting to dislike it, but it’s actually cleaner in some ways than putting when: conditions on every task. I suppose it comes down to how much overhead is involved with setting up each play in the playbook. Does it do fact gathering on each play, or only on the first play involving a particular host? That’s really interesting, and I don’t believe I would have thought of trying it that way. As the kids say, “Today I Learned …”

1 Like

Thank you - i would like to keeop syncronize, but the creen s show the playbook which stops, because the rsync module cant auth the user correctly - i had this in the beginnig and then i added for testing the awx private key to the source host in ~/.ssh/id_rsa and the module could copy, but i dont like thins for two reasons: its root access and the private key of the awx privkey should just be at the server itself
I learned then that the playbook with this key auth issue schould fail, but it just keeps on looping around all day long in my awxsrv - the problem is i cant see an error where i can see where the auth fails. the log just stopps proceeding at an auth step.
I cant tell if its an bug i schould report at the git?

Here's the log of the task I made the screenshot of, with full verbosity.
I stopped it after 3 hours - I'm adding only the last task's log because the full log has 600 lines.


TASK [Synchronize files to target host] ****************************************
task path: /runner/project/zzz_testplaybooks_and_archive/111movefiles_rsync3.yaml:72
redirecting (type: modules) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.151.237> (0, b'/root\\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755 `" && echo ansible-tmp-1718701712.9134433-213-60173677516755="` echo /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755 `" ) && sleep 0'"'"''
<192.168.151.237> (0, b'ansible-tmp-1718701712.9134433-213-60173677516755=/root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755\\n', b'')
Using module file /usr/share/ansible/collections/ansible_collections/ansible/posix/plugins/modules/synchronize.py
<192.168.151.237> PUT /runner/.ansible/tmp/ansible-local-2295bqx_n8/tmpokmyow7d TO /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py
<192.168.151.237> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' '[192.168.151.237]'
<192.168.151.237> (0, b'sftp> put /runner/.ansible/tmp/ansible-local-2295bqx_n8/tmpokmyow7d /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py\\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/ /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py && sleep 0'"'"''
<192.168.151.237> (0, b'', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' -tt 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py && sleep 0'"'"''

Okay, not sure what’s happening, but this feels like the ssh connection is waiting for interactive input (despite StrictHostKeyChecking=no), then the timeout is kicking in, and it retries… forever. (All speculation of course.)

Just to clarify: is it the case that your AWX is connecting as root to the managed nodes rather than as a dedicated userid with sudo abilities? I mean, I’m seeing “root” all over the log entries, so I guess that’s the case. If so, does that also mean you never deal with privilege escalation because you’re already root everywhere? And you therefore never deal with become_user: either, again because you’re already root? If so, it seems like this would make some things easier, but at the same time, not, in the sense that ssh as root all over the place is scary beyond imagining. I don’t even do that at home when I’m just testing one-off throw-away stuff. (I’m trying to imagine how to replicate your set-up.)

Humor me, please, and copy-n-paste the final task as it was when it produced the log above. I want to make sure I understand what code you’re starting with.

Yeah, it's somewhat weird - it seems synchronize doesn't transmit the user-setting part, but it does transmit the cert part. I gave my mentally limited ChatGPT the log and it said it could be an issue because my AWX private key has a password, but this would be the first playbook with that issue, and I switched to a key without a password and there was no difference in the end. But from this I came to a thought:
When I sync between two remote hosts, does AWX somehow use its private key for auth from the source server to the target server? No, you have to add the keys you need yourself.
And that's what I think should at least be mentioned in the description of the module - for people like me it's not that obvious.
Playbook:
I tried probing around with the user added in the rsync_opts, but it fails and seems to try to connect with username@ip path username@ip path - twice the data.

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - 192.168.19.201
    - 192.168.151.237
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237'

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.19.201'

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        mode: '0600'
        force: yes

- name: Set permissions for private key
  hosts: sync_src
  tasks:
    - name: Set permissions for private key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'

- name: Set permissions for public key
  hosts: sync_dst
  tasks:
    - name: Set permissions for public key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        group: remcpyusr
        mode: '0644'

- name: Synchronize files to target host
  hosts: sync_dst
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.synchronize:
        src: '{{ item }}'
        dest: remcpyusr@{{ ansible_default_ipv4.address }}:{{ item }}
        #dest: {{ ansible_default_ipv4.address }}:{{ item }}
        rsync_opts:
          - "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa'"
          #- "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa -o StrictHostKeyChecking=no -l remcpyusr'"
      delegate_to: '{{ groups["sync_src"] | random }}'
      when: "'sync_dst' in group_names"
      loop:
        - /tmp/test.txt

The log is too tedious to split - the posts have a limit of 3k characters and the log has almost 19k.
I hope you have a Google account:

Too true. I went through the whole thing line by line (forcing myself awake every couple of lines).

Alas, the only interesting thing is what’s not there. At the end, we see the guts of the synchronize.py module get copied to your delegate_to: host, and a chmod u+x on the containing directory and the file itself:

Using module file /usr/share/ansible/collections/ansible_collections/ansible/posix/plugins/modules/synchronize.py
<192.168.151.237> PUT /runner/.ansible/tmp/ansible-local-2295bqx_n8/tmpokmyow7d TO /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py
<192.168.151.237> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' '[192.168.151.237]'
<192.168.151.237> (0, b'sftp> put /runner/.ansible/tmp/ansible-local-2295bqx_n8/tmpokmyow7d /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py\\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/ /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py && sleep 0'"'"''
<192.168.151.237> (0, b'', b'')

After that, we see it get run under /bin/sh with the -c parameter:

<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' -tt 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1718701712.9134433-213-60173677516755/AnsiballZ_synchronize.py && sleep 0'"'"''

Notably absent is any hint of the parameters you used on ansible.builtin.synchronize, or of any activity from that point forward.

Also crucially absent is any hint of the synchronize task running as remcpyusr rather than root. At this point of desperation, I would try adding “become_user: remcpyusr” on that final task at the same indentation level as delegate_to: and loop:. It’s grasping at straws really, but at this point we’re just trying to get it to behave differently. We don’t even care particularly whether that difference is effective; we’re just hoping for something noticeable.
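In other words, the last task would become something like this (untested - just trying to provoke different behavior; note that become_user: has no effect unless become: is enabled):

    - name: Synchronize files to target host
      ansible.builtin.synchronize:
        src: '{{ item }}'
        dest: remcpyusr@{{ ansible_default_ipv4.address }}:{{ item }}
        rsync_opts:
          - "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa'"
      become: true
      become_user: remcpyusr
      delegate_to: '{{ groups["sync_src"] | random }}'
      when: "'sync_dst' in group_names"
      loop:
        - /tmp/test.txt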

If anybody else has another suggestion, we’d love to hear it!

The log you shared answers one of my other questions. Every new play causes a new round of facts gathering for each host. Most of the time (ignoring the infinite amount of time at the last task) is spent re-gathering host facts. That’s why you’d want to use when: "'blah_blah' in group_names" on tasks instead of starting new plays. Play setup is eating up your clock.

[Edit: you could add gather_facts: false on those later plays.]

Finally, drop those “Set permissions […]” tasks, and move the owner:, group:, and mode: settings up to the “Distribute […]” tasks. (The “Set permissions […]” tasks aren’t actually doing anything; the owner and group are already defaulting to remcpyusr, but it’s better to be explicit about them in the “Distribute […]” tasks.)
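Put together, one of the later plays might look like this (a sketch, untested):

    - name: Distribute the SSH public key
      hosts: sync_dst
      gather_facts: false  # facts were already gathered in the first play
      tasks:
        - name: Distribute the SSH public key
          ansible.builtin.copy:
            src: /tmp/id_rsa_remcpyusr.pub
            dest: /home/remcpyusr/.ssh/authorized_keys
            owner: remcpyusr
            group: remcpyusr
            mode: '0644'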

Thank you!
I now have a second test VM with SQL and changed the playbook a little like you told me, and I've noted my global vars. I will test today with a test workflow.

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - 192.168.151.141 # {{ sql_dest_srv }}
    - 192.168.151.237 # {{ sql_src_srv }}
  become: true

  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237' # {{ sql_src_srv }}

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.151.141' # {{ sql_dest_srv }}

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        mode: '0600'
        force: yes

- name: Synchronize files to target host
  hosts: sync_dst
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.command:
        cmd: rsync -avz -e "ssh -i /home/remcpyusr/.ssh/id_rsa" remcpyusr@192.168.151.237:/tmp/test2.txt /tmp/ # {{ sql_src_srv }}

- name: Ensure key pair is absent on the controller and hosts
  hosts: all
  tasks:
    - name: Ensure key pair is absent on the controller and hosts
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /tmp/remcpyusr_key
        - /tmp/remcpyusr_key.pub
        - /home/remcpyusr/.ssh/authorized_keys
        - /home/remcpyusr/.ssh/id_rsa

I have to debug the grouping now:

[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: ansible_default_ipv4.address == '{{
sql_src_srv }}'
fatal: [192.168.151.141]: FAILED! => {
    "msg": "The conditional check 'ansible_default_ipv4.address == '{{ sql_src_srv }}'' failed. The error was: Conditional is marked as unsafe, and cannot be evaluated.\n\nThe error appears to be in '/runner/project/functions/SQL/migration/rsync_copy_workflow.yaml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n    - name: Create a \"sync_src\" group\n      ^ here\n"
}
[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: ansible_default_ipv4.address == '{{
sql_src_srv }}'
fatal: [192.168.151.237]: FAILED! => {
    "msg": "The conditional check 'ansible_default_ipv4.address == '{{ sql_src_srv }}'' failed. The error was: Conditional is marked as unsafe, and cannot be evaluated.\n\nThe error appears to be in '/runner/project/functions/SQL/migration/rsync_copy_workflow.yaml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n    - name: Create a \"sync_src\" group\n      ^ here\n"
}

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - "{{ sql_dest_srv }}"  # {{ sql_dest_srv }} 192.168.151.141
    - "{{ sql_src_srv }}"   # {{ sql_src_srv }} 192.168.151.237
  become: true

  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '{{ sql_src_srv }}'  # {{ sql_src_srv }} 192.168.151.237

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '{{ sql_dest_srv }}'  # {{ sql_dest_srv }} 192.168.151.141

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        mode: '0600'
        force: yes

- name: Synchronize files to target host
  hosts: sync_dst
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.command:
        cmd: rsync -avz -e "ssh -i /home/remcpyusr/.ssh/id_rsa" remcpyusr@{{ sql_src_srv }}:/tmp/test2.txt /tmp/  # {{ sql_src_srv }} 192.168.151.237

- name: Ensure key pair is absent on the controller and hosts
  hosts: all
  tasks:
    - name: Ensure key pair is absent on the controller and hosts
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /tmp/id_rsa_remcpyusr
        - /tmp/id_rsa_remcpyusr.pub
        - /home/remcpyusr/.ssh/authorized_keys
        - /home/remcpyusr/.ssh/id_rsa

Done - I need to use the variables without {{ }} in the grouping conditionals.
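So the working grouping task references the bare variable in when:, since the conditional is already a Jinja2 context:

    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == sql_src_srv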

Now I have a question about a different playbook - I need to add another user to the SQL server. I already added the credentials as env vars, but the task fails because of permission issues:


TASK [Create MySQL user with necessary privileges] *****************************
fatal: [192.168.151.237]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, \"Access denied for user 'root'@'localhost'\")"}

I thought that being logged in as root with become: true would be enough.
I don't have the root passwords for every server, and it would not be good to reset the root password, because afterwards other services or colleagues couldn't reach the servers anymore.

---
- name: Create MySQL user with specific privileges and install required plugins to the host
  hosts:
    - "{{ sql_dest_srv }}"  # {{ sql_dest_srv }} 192.168.151.141
    - "{{ sql_src_srv }}"   # {{ sql_src_srv }} 192.168.151.237
  become: true
  vars:
    ansible_python_interpreter: /opt/myenv/bin/python3

  tasks:
    - name: Ensure pip and venv are installed
      ansible.builtin.apt:
        name: 
          - python3-pip
          - python3-venv
          - python3-dev
          - build-essential
          - libssl-dev
          - libffi-dev
          - libmysqlclient-dev
          - pkg-config
        state: present
      #become: yes

    - name: Create a virtual environment
      ansible.builtin.command:
        cmd: python3 -m venv /opt/myenv
      #become: yes
      args:
        creates: /opt/myenv

    - name: Install required Python packages in virtual environment
      ansible.builtin.pip:
        executable: /opt/myenv/bin/pip
        name:
          - pymysql
          - mysqlclient
      #become: yes

    - name: Create MySQL user with necessary privileges
      community.mysql.mysql_user:
        name: "{{ lookup('env', 'sql_user') }}"
        password: "{{ lookup('env', 'sql_passwd') }}"
        priv: '*.*:ALL,GRANT'
        host: '%'
        state: present
      #become: yes

My bot adds become everywhere - even when I delete them and, after asking it to just check the indentation WITHOUT changing anything else, it sets become everywhere again, as if it wants to mock me!?

How could I best manage the auth here?

Thank you again!

I don’t know what this “bot” is, but you might consider firing it.

2 Likes

I have a question about an SQL-related part of my SQL migration project:
I need to add a user to export/import/delete/create databases. I have an env var for the database, so I could create a user with just the permissions for that database, but I have some questions in my mind which I can't solve:
1: When I need to delete the same DB on the destination server in order to import the DB, how do the permissions behave?
2: I need an existing SQL root user to add the service user with its permissions, and it would not be smart to keep local root login active on the servers and use the root credentials for the user add :face_with_raised_eyebrow:

When I try to manage users with community.mysql.mysql_user, I get the error that the credentials in /root/.my.cnf are not correct to connect as 'root'@'localhost', but when I locally enter "mysql" it uses that file to automatically log in as root.
With this module I could temporarily add the user and would just need the root access file on each host.
I tried adding delegate_to to the task, but I get the same error, as if AWX looks for the file locally.
And the same error when using:

        login_user: "{{ default_user }}"
        login_password: "{{ default_pw }}"
---
- name: Create MySQL user with necessary privileges on specific host
  hosts: 192.168.151.237
  become: true
  vars:
    ansible_python_interpreter: /opt/myenv/bin/python3
  tasks:
    - name: Create MySQL user with necessary privileges
      community.mysql.mysql_user:
        name: testbob
        password: testbobs-password
        priv: '*.*:ALL,GRANT'
        host: '%'
        state: present
"msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, \\"Access denied for user 'root'@'localhost'\\")"

Here's a snap of the .my.cnf content:

Thank you in Advance again!!!

I would expect that a socket connection is used by root when connecting to localhost, not the details from /root/.my.cnf?

Another user was having problems like this the other day, perhaps there might be some suggestions in this thread that help you?

Thank you!
I searched Stack Overflow for the error code and found the solution:

login_unix_socket: /var/run/mysqld/mysqld.sock

but I can't tell why.
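The task now looks like this:

    - name: Create MySQL user with necessary privileges
      community.mysql.mysql_user:
        name: testbob
        password: testbobs-password
        priv: '*.*:ALL,GRANT'
        host: '%'
        state: present
        login_unix_socket: /var/run/mysqld/mysqld.sock

If I understand it correctly, the packaged root account uses socket authentication (the server checks the connecting OS user over the Unix socket instead of a password), so the module has to be pointed at the socket rather than connecting over TCP to localhost.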

Now I would set my SQL server root users to no login, give them a secure password, and could then use mysql_user and the other modules to add a temp user with full permissions and remove it after the migration is done.
Should I add tests after each step to verify everything?

Hello!
My test VMs are running fine, but a question came up about the necessary .my.cnf:
Is there a suggested method to hash the content so it's not in clear text?
Has any of you done anything like this?

Thank you!

There is the MySQL 5.6 Obfuscated Authentication Credential Option:

MySQL 5.6 and above support an obfuscated authentication credential option file called .mylogin.cnf that is created with mysql_config_editor.

MariaDB does not support this. The passwords in MySQL’s .mylogin.cnf are only obfuscated, rather than encrypted, so the feature does not really add much from a security perspective. It is more likely to give users a false sense of security, rather than to seriously protect them.

These days the /root/.my.cnf file is not required because MariaDB and MySQL default to using socket authentication for the root user via localhost (connecting to 127.0.0.1 doesn’t use socket based authentication) on distros such as Debian and Ubuntu and I would assume that this is also the case on other distros? :person_shrugging:

The question is: if I auth like this, do these modules still function?