SQL migration playbook questions

It just stops doing anything; the playbook runs for hours without making any progress.
The longest run was 16 hours with the same issue.

Enter passphrase for /runner/artifacts/2761/ssh_key_data: 
Identity added: /runner/artifacts/2761/ssh_key_data (/runner/artifacts/2761/ssh_key_data)
ansible-playbook [core 2.15.12rc1]
  config file = None
  configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.18 (main, Jan 24 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3)
  jinja version = 3.1.4
  libyaml = True
No config file found; using defaults
host_list declined parsing /runner/inventory/hosts as it did not pass its verify_file() method
Parsed /runner/inventory/hosts inventory source with script plugin
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
Skipping callback 'awx_display', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: sql_remote_dump-test.yaml ********************************************
1 plays in functions/SQL/migration/sql_remote_dump-test.yaml

PLAY [Transfer MySQL Database Dump File] ***************************************
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize

TASK [Transfer dump file from 192.168.151.237 to 192.168.19.201] ***************
task path: /runner/project/functions/SQL/migration/sql_remote_dump-test.yaml:7
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
skipping: [192.168.151.237] => {
    "changed": false,
    "false_condition": "inventory_hostname == \\"192.168.19.201\\"",
    "skip_reason": "Conditional result was False"
}
redirecting (type: modules) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.151.237> (0, b'/root\n', b"Warning: Permanently added '192.168.151.237' (ED25519) to the list of known hosts.\r\n")
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035 `" && echo ansible-tmp-1715783402.0129042-28-89502946805035="` echo /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035 `" ) && sleep 0'"'"''
<192.168.151.237> (0, b'ansible-tmp-1715783402.0129042-28-89502946805035=/root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035\n', b'')
<192.168.19.201> Attempting python interpreter discovery
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'python3.11'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.10'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.9'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.8'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.151.237> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python3.10\n/usr/bin/python3\nENDFOUND\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3.10 && sleep 0'"'"''
<192.168.151.237> (0, b'{"platform_dist_result": [], "osrelease_content": "PRETTY_NAME=\\\\"Ubuntu 22.04.4 LTS\\\\"\\\\nNAME=\\\\"Ubuntu\\\\"\\\\nVERSION_ID=\\\\"22.04\\\\"\\\\nVERSION=\\\\"22.04.4 LTS (Jammy Jellyfish)\\\\"\\\\nVERSION_CODENAME=jammy\\\\nID=ubuntu\\\\nID_LIKE=debian\\\\nHOME_URL=\\\\"https://www.ubuntu.com/\\\\"\\\\nSUPPORT_URL=\\\\"https://help.ubuntu.com/\\\\"\\\\nBUG_REPORT_URL=\\\\"https://bugs.launchpad.net/ubuntu/\\\\"\\\\nPRIVACY_POLICY_URL=\\\\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\\\\"\\\\nUBUNTU_CODENAME=jammy\\\\n"}\\n', b'')
Using module file /usr/share/ansible/collections/ansible_collections/ansible/posix/plugins/modules/synchronize.py
<192.168.151.237> PUT /runner/.ansible/tmp/ansible-local-22df7j6lru/tmpv1jqciiq TO /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py
<192.168.151.237> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' '[192.168.151.237]'
<192.168.151.237> (0, b'sftp> put /runner/.ansible/tmp/ansible-local-22df7j6lru/tmpv1jqciiq /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/ /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py && sleep 0'"'"''
<192.168.151.237> (0, b'', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' -tt 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py && sleep 0'"'"''

Do I need to set the private key from the AWX server on the hosts as well?
I would need the key when I copy directly from one host to another, and I don't know how the playbook handles SSH auth here when the “local” host is not really the AWX server itself.
When using ansible.builtin.synchronize and delegating to a certain host, doesn't the host I choose need a private key, with the matching public key on the server the files should be rsynced to?

Yes, I added the private key of my AWX server (~/.ssh/id_rsa) to the host I delegated the task to, and the file got copied:

---
- name: Transfer MySQL Database Dump File
  hosts: all
  gather_facts: false

  tasks:
    - name: Transfer dump file from 192.168.151.237 to 192.168.19.201
      ansible.builtin.synchronize:
        src: "/tmp/sakila_backup.sql"
        dest: "/tmp/sakila_backup.sql"
        mode: push  
      delegate_to: 192.168.151.237  
      when: inventory_hostname == "192.168.19.201"

I think I will create a separate key/user for the file transfer, so that not all Linux hosts are reachable from each other as root; that would be way too easy for potential bad guys.

Hello!
I would like to know whether it is usual that, when I copy with ansible synchronize between two remote hosts, I have to add the private key to them as well. What I discovered is that my playbooks fail until I put the private key on the hosts too.
Is this intended, or should I ask on the AWX Git?
Thank you!

The synchronize module isn’t magic. The initiating side of the connection must have a “private key” (or as I think of it, a “key”) and the non-initiating side must have the corresponding “public key” (or as I think of it, a “lock”) in the correctly configured place, which is probably /home/<some_user>/.ssh/authorized_keys. It is not necessary for the non-initiating side to have any “private key”.

You could generate a key pair at the beginning of your run, put the “public key” (the “lock” in my brain) on any hosts you want synchronize to connect to, and only place the “private key” (my “key”) on the host you want to connect from. After all your syncing, remove at least the “private key”, but also any copies of the now-useless “public key” you placed on other hosts. Extra points if you adopt a naming convention that allows you to clean up any of these ephemeral keys even after their creating job fails.
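
For the clean-up part, a minimal sketch of what such a naming convention buys you (the “ephemeral-sync-” prefix here is invented; adjust to taste):

    - name: Find leftover ephemeral sync keys from earlier runs
      ansible.builtin.find:
        paths: /tmp
        patterns: 'ephemeral-sync-*'
      register: leftover_keys

    - name: Remove any leftover ephemeral sync keys
      ansible.builtin.file:
        path: '{{ item.path }}'
        state: absent
      loop: '{{ leftover_keys.files }}'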

I would strongly recommend not touching “the” AWX key pair on any hosts. You risk breaking AWX itself or the ability to manage any number of impacted hosts.

Thank you!
I think I have to do it this way, because you can only select one private key for auth in a template, so everything else has to happen in the playbooks themselves.
I have a playbook which creates a user for the rsync auth and adds the keys, but only with permission to read/write /tmp.
Such a simple step is quite challenging.
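
Roughly something like this, if it helps (just a sketch; the user name and key path are placeholders, and restricting the user to /tmp is not shown):

    - name: Ensure the dedicated transfer user exists
      ansible.builtin.user:
        name: remcpyusr
        shell: /bin/bash
        create_home: true

    - name: Authorize the transfer key for that user
      ansible.posix.authorized_key:
        user: remcpyusr
        key: "{{ lookup('file', '/tmp/remcpyusr_key.pub') }}"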

Hello!
I'm now at the part where I try to make a test playbook to move a file with temporarily added keys and a custom user which already exists on both ends, but I get errors like “module_stderr”: “/bin/sh: line 1: sudo: command not found\n”.
The keys do get generated, thanks to my new EE I added for using the Rocket.Chat public modules (party hard!),
but it seems to fail at moving the generated keys into the user folder, because there is no content in /home/remcpyusr/.ssh/id_rsa from the generated key in /tmp.

Here's the playbook:

---
- name: Setup SSH keys for remcpyusr and transfer file
  hosts: all
  become: true
  tasks:
    - name: Check if private key exists on control node
      ansible.builtin.stat:
        path: /tmp/remcpyusr_key
      delegate_to: localhost
      register: private_key_stat
      run_once: true

    - name: Check if public key exists on control node
      ansible.builtin.stat:
        path: /tmp/remcpyusr_key.pub
      delegate_to: localhost
      register: public_key_stat
      run_once: true

    - name: Remove existing private key if it exists
      ansible.builtin.file:
        path: /tmp/remcpyusr_key
        state: absent
      delegate_to: localhost
      when: private_key_stat.stat.exists
      run_once: true

    - name: Remove existing public key if it exists
      ansible.builtin.file:
        path: /tmp/remcpyusr_key.pub
        state: absent
      delegate_to: localhost
      when: public_key_stat.stat.exists
      run_once: true

    - name: Generate SSH key pair on control node
      community.crypto.openssh_keypair:
        path: /tmp/remcpyusr_key
        type: rsa
        size: 2048
      delegate_to: localhost
      run_once: true
      register: ssh_key

    - name: Copy private key to source host
      ansible.builtin.copy:
        src: /tmp/remcpyusr_key
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'
      delegate_to: 192.168.151.237
      run_once: true

    - name: Copy public key to target host
      ansible.builtin.copy:
        src: /tmp/remcpyusr_key.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        group: remcpyusr
        mode: '0644'
      delegate_to: 192.168.19.201
      run_once: true

- name: Transfer file from source to target
  hosts: 192.168.151.237
  become: true
  tasks:
    - name: Synchronize file to target host
      ansible.builtin.synchronize:
        src: /tmp/test.txt
        dest: remcpyusr@192.168.19.201:/tmp/test.txt
        rsync_opts:
          - "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa'"

- name: Clean up SSH keys
  hosts: all
  become: true
  tasks:
    - name: Remove private key from source host
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/id_rsa
        state: absent
      delegate_to: 192.168.151.237
      run_once: true

    - name: Remove public key from target host
      ansible.builtin.lineinfile:
        path: /home/remcpyusr/.ssh/authorized_keys
        regexp: '^{{ ssh_key.public_key }}$'
        state: absent
      delegate_to: 192.168.19.201
      run_once: true

    - name: Remove generated keys from control node
      ansible.builtin.file:
        path: /tmp/remcpyusr_key
        state: absent
      delegate_to: localhost
      run_once: true

    - name: Remove generated public key from control node
      ansible.builtin.file:
        path: /tmp/remcpyusr_key.pub
        state: absent
      delegate_to: localhost
      run_once: true

Thank you again!

I'm rather confused, because it seems the key gets generated locally on AWX and needs to be copied, but somehow both of those keys end up in /tmp on both hosts. Is it that, because I chose hosts: all, it ignores that the task is delegated to localhost (AWX)? :face_with_raised_eyebrow:

I started step by step and got this one running. I like this one better, because it will be easier to integrate into a workflow with variables.
The only problem left is that it copies the file to itself, but I can see the modification date changes once the playbook has done its job.
Thanks for your input again!

---
- name: Generate SSH keypair on remote hosts
  hosts:
    - 192.168.19.201
    - 192.168.151.237
  become: yes
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa
        type: rsa
        force: yes
        comment: "remcpyusr"
        size: 2048
      register: keypair

    - name: Ensure /home/remcpyusr/.ssh directory exists
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh
        state: directory
        mode: '0700'
        owner: remcpyusr
        group: remcpyusr

    - name: Copy private key to /home/remcpyusr/.ssh/id_rsa
      ansible.builtin.copy:
        src: /tmp/id_rsa
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: root
        group: root
        mode: '0600'
        force: yes
        remote_src: true

    - name: Copy public key to /home/remcpyusr/.ssh/authorized_keys
      ansible.builtin.copy:
        src: /tmp/id_rsa.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: root
        group: root
        mode: '0644'
        force: yes
        remote_src: true

    - name: Set permissions for private key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'

    - name: Set permissions for public key
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        group: remcpyusr
        mode: '0644'

- name: Transfer test file from 192.168.151.237 to 192.168.19.201
  hosts: 192.168.151.237
  become: yes
  tasks:
    - name: Ensure /tmp/test.txt exists
      ansible.builtin.file:
        path: /tmp/test.txt
        state: touch

    - name: Transfer test.txt to 192.168.19.201
      ansible.builtin.synchronize:
        src: /tmp/test.txt
        dest: /tmp/test.txt
        mode: push
        rsync_opts:
          - "--rsh='ssh -o StrictHostKeyChecking=no -l remcpyusr'"
      delegate_to: 192.168.151.237  
      when: inventory_hostname == "192.168.19.201"
      become: no

Several things are worth commenting on here.

Calling “stat” before “absent”

It looks like you’re still thinking in terms of scripting/programming rather than idempotency. You’re checking the status of a couple of files, and if you find them you delete them. That’s scripting.

You want it to be the case that neither file exists on the ansible controller (localhost). So state that. That’s idempotency. (See next section.)

Pairs of tasks rather than loop:

You can accomplish the first four tasks in one idempotent task:

    - name: Ensure key pair is absent on the controller
      ansible.builtin.file:
        path: '{{ item }}'
        state: absent
      delegate_to: localhost
      run_once: true
      loop:
        - /tmp/remcpyusr_key
        - /tmp/remcpyusr_key.pub

There is no need to check if the files exist first. I promise you, Ansible won’t remove them if they aren’t there.

Questionable ("delegate_to:", "run_once:") pattern

Using both “delegate_to:” and “run_once:” is fine when you need to ensure the controller is in a certain state. When you aren’t trying to ensure a certain state on the controller, either “delegate_to:” or “run_once:” may be appropriate. But when you have both on the same task and delegate_to: is anything other than the controller, that very likely indicates a deficiency with your inventory.

In your case, you need (and I’m making up names here, but you get the idea) a “sync_src” group and a “sync_dst” group. Now, either or both of those groups may only contain one host, but you should at least make it possible to work with multiple “sync_dst” hosts. If you do it right, you can do this in one play rather than three. (Unnecessary multiple plays may also indicate an inventory deficiency.)

Another way to look at it: If every task has “delegate_to:” and “run_once:” on it, you’re probably re-implementing (badly) an inventory system. Have a re-think, and take advantage of the flexible inventory system Ansible makes available to you. It’ll make your code easier to read, and probably more capable.
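
For illustration, those groups could be as simple as this in a plain INI inventory (a sketch using the made-up group names from above):

    [sync_src]
    192.168.151.237

    [sync_dst]
    192.168.19.201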

In particular, I think your “Copy private key to source host” task is at best confusing, and maybe even broken. It should be this simple:

    - name: Copy private key to source hosts
      ansible.builtin.copy:
        src: /tmp/remcpyusr_key
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        group: remcpyusr
        mode: '0600'
      when: "'sync_src' in group_names"

    - name: Copy public key to target hosts
      ansible.builtin.copy:
        src: /tmp/remcpyusr_key.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        group: remcpyusr
        mode: '0644'
      when: "'sync_dst' in group_names"

Your “Transfer file from source to target” play becomes a single task (generalized to handle multiple files):

    - name: Synchronize files to target host
      ansible.builtin.synchronize:
        src: '{{ item }}'
        dest: remcpyusr@{{ ansible_default_ipv4.address }}:{{ item }}
        rsync_opts:
          - "--rsh='ssh -i /home/remcpyusr/.ssh/id_rsa'"
      delegate_to: '{{ groups["sync_src"] | random }}'
      when: "'sync_dst' in group_names"
      loop:
        - /tmp/test.txt

Key Clean-up

This also becomes neater with appropriate “group think”:

    - name: Remove private key from source hosts
      ansible.builtin.file:
        path: /home/remcpyusr/.ssh/id_rsa
        state: absent
      when: "'sync_src' in group_names"

    - name: Remove public key from target hosts
      ansible.builtin.lineinfile:
        path: /home/remcpyusr/.ssh/authorized_keys
        regexp: '^{{ ssh_key.public_key }}$'
        state: absent
      when: "'sync_dst' in group_names"

Do let us know whether any of this makes sense or helps improve your process. I’ll be curious to know how it turns out.

This looks good. The excess stuff is what ChatGPT added on its own; I often told it just to add a simple feature and it changed the whole playbook from time to time, which made it very difficult to keep everything neat and clean. I tried some of those AWX plugins, but, I don't know, this thing often uses wrong/non-existent variables, and all that fun all day long :rofl:
Can I keep the two hosts at the beginning of the playbook then?

Thank you again, and have a nice weekend!

Yes, you can. You can do that and still set the groups in the same playbook. That will keep everything in one file. You don’t have to modify your “real” inventory with these one-off groups. Of course, you can do that later, once you start integrating this into your regular workflow.

---
- name: Generate SSH keypair on remote hosts
  hosts:
    - 192.168.19.201
    - 192.168.151.237
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237'

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.19.201'

    [… other tasks which use `when: "'sync_blah' in group_names"` expressions …]

Thank you! I just had to add remote_src and force to the copy, because somehow the key already gets generated on both hosts in /tmp, so I just have to move it locally on each of the hosts.
The playbook now really takes its time :laughing:

I have a problem copying the files from the operator to the clients: it tells me the file doesn't exist.
In my log I see the keys get generated on the two test hosts, but there is no part where the key gets generated for localhost. I even added localhost to the host list, but no difference…

---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - 192.168.19.201
    - 192.168.151.237
    - localhost
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == '192.168.151.237'

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == '192.168.19.201'

    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: yes
        comment: "remcpyusr"
        size: 2048

    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /tmp/id_rsa_remcpyusr.pub
        mode: '0644'
        force: yes

    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /tmp/id_rsa_remcpyusr
        mode: '0600'
        force: yes

The “Create SSH keypair in /tmp” task is one you do want to run on the controller: “delegate_to: localhost” and “run_once: true”. Likewise for “Remove generated key pair from the control node”.

    - name: Remove generated key pair from control node
      ansible.builtin.file:
        path: '{{ item }}'
        state: absent
      loop:
        - /tmp/remcpyusr_key
        - /tmp/remcpyusr_key.pub
      delegate_to: localhost
      run_once: true
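
And the generation side, pinned to the controller the same way (a sketch reusing the earlier paths; it assumes the community.crypto collection is available in your EE):

    - name: Generate SSH key pair on the control node
      community.crypto.openssh_keypair:
        path: /tmp/remcpyusr_key
        type: rsa
        size: 2048
      delegate_to: localhost
      run_once: true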

I rather have the problem that I can't get any file I generate on localhost; I think it has something to do with this.
Even when I create a file locally and run a test to see whether it exists, it isn't there.
They suggest creating a container with podman and adding it to AWX, it seems.

---
- name: Generate and manage test file whereami
  hosts: localhost
  tasks:
    - name: Generate test file whereami on localhost
      ansible.builtin.command:
        cmd: echo "This is a test file named whereami" > /tmp/whereami

    - name: Verify that the test file whereami exists on localhost
      ansible.builtin.stat:
        path: /tmp/whereami
      register: whereami_file

    - name: Fail if the test file whereami does not exist on localhost
      ansible.builtin.fail:
        msg: "The test file whereami does not exist on localhost."
      when: not whereami_file.stat.exists

That “ansible.builtin.command” task can’t work, because I/O redirection (“> /tmp/whereami”) is a shell function, not a command function.

Try the same test with “ansible.builtin.shell”.
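
For example (a sketch; the copy variant needs no shell at all):

    - name: Generate test file whereami on localhost (shell handles the redirection)
      ansible.builtin.shell:
        cmd: echo "This is a test file named whereami" > /tmp/whereami

    - name: Or create it without any shell
      ansible.builtin.copy:
        content: "This is a test file named whereami\n"
        dest: /tmp/whereami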

I can't add delegate_to and run_once to ansible.builtin.openssh_keypair.
I can't add a “localhost” host to the inventory either, because no SSH connection is possible there (no keys).

fatal: [192.168.151.237]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.openssh_keypair) module: delegate_to, run_once. Supported parameters include: attributes, backend, comment, force, group, mode, owner, passphrase, path, private_key_format, regenerate, selevel, serole, setype, seuser, size, state, type, unsafe_writes (attr)."}

That looks like a whitespace issue: delegate_to and run_once should be indented as much as ansible.builtin.openssh_keypair is, not more; see the example above.
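
For example, against the task from your playbook (a sketch; note that the two keywords sit at the task level, aligned with the module name):

    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        size: 2048
      delegate_to: localhost
      run_once: true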

Okay, do I have to make a loop for the key generation?