Thank you!
I see my playbooks run without errors with those files deleted - very nice!
After some checking I noticed that the target hosts first need to be added as known hosts: when I run the playbook it stops at the rsync step, and only after running rsync manually on the hosts (which adds the host to known_hosts) does the playbook do its job.
This is the current PB:
---
- name: Generate SSH keypair on remote hosts and distribute keys
  hosts:
    - "{{ sql_dest_srv }}"  # {{ sql_dest_srv }} 192.168.151.141
    - "{{ sql_src_srv }}"   # {{ sql_src_srv }} 192.168.151.237
  become: true
  tasks:
    - name: Create a "sync_src" group
      ansible.builtin.group_by:
        key: sync_src
      when: ansible_default_ipv4.address == sql_src_srv  # {{ sql_src_srv }} 192.168.151.237

    - name: Create a "sync_dst" group
      ansible.builtin.group_by:
        key: sync_dst
      when: ansible_default_ipv4.address == sql_dest_srv  # {{ sql_dest_srv }} 192.168.151.141

- name: Create SSH keypair in /tmp
  hosts: localhost
  tasks:
    - name: Create SSH keypair in /tmp
      ansible.builtin.openssh_keypair:
        path: /tmp/id_rsa_remcpyusr
        type: rsa
        force: true
        comment: "remcpyusr"
      run_once: true

- name: Distribute the SSH public key
  hosts: sync_src
  tasks:
    - name: Distribute the SSH public key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr.pub
        dest: /home/remcpyusr/.ssh/authorized_keys
        owner: remcpyusr
        mode: '0644'
        force: yes

- name: Distribute the SSH private key
  hosts: sync_dst
  tasks:
    - name: Distribute the SSH private key
      ansible.builtin.copy:
        src: /tmp/id_rsa_remcpyusr
        dest: /home/remcpyusr/.ssh/id_rsa
        owner: remcpyusr
        mode: '0600'
        force: yes

- name: Synchronize files to target host
  hosts: sync_dst
  tasks:
    - name: Synchronize files to target host
      ansible.builtin.command:
        cmd: rsync -avz -e "ssh -i /home/remcpyusr/.ssh/id_rsa" remcpyusr@{{ sql_src_srv }}:/tmp/{{ sql_src_db }}.sql /tmp/  # {{ sql_src_srv }} 192.168.151.237

- name: Ensure key pair is absent on the controller and hosts
  hosts: all
  tasks:
    - name: Ensure key pair is absent on the controller and hosts
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /tmp/id_rsa_remcpyusr
        - /tmp/id_rsa_remcpyusr.pub
        - /home/remcpyusr/.ssh/authorized_keys
        - /home/remcpyusr/.ssh/id_rsa
Sorry, but you've lost me there. I can't tell if you're showing us something you discovered, asking for an explanation, or requesting assistance. Could you re-state it?
Sorry, what I meant is that I need to either add a way of accepting the first-time SSH connection (for rsync etc.) in the playbook, or add the hosts to ~/.ssh/known_hosts manually.
I just made the discovery (I wasn't a heavy all-day Linux user until last summer, when I changed employers; my previous 8 years in IT administration were quite easygoing, with GUIs everywhere) that when a device connects for the first time over SSH on port 22 to a new host, you have to accept the host key - and so the playbook got stuck, probably waiting for the "yes" input that adds the host to known_hosts.
So I'm asking: is there something like a -y parameter which accepts every such request automatically?
My bot probably has an idea: ansible.builtin.known_hosts:
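A minimal sketch of that idea, assuming the source host's key can be collected with ssh-keyscan from the destination host before the rsync play runs (play and task names are illustrative, the variables follow the playbook above):

- name: Pre-accept the source host key on the destination host
  hosts: sync_dst
  tasks:
    - name: Scan the SSH host key of the source server
      ansible.builtin.command:
        cmd: "ssh-keyscan -t rsa {{ sql_src_srv }}"
      register: keyscan
      changed_when: false

    - name: Add the scanned key to known_hosts for remcpyusr
      ansible.builtin.known_hosts:
        path: /home/remcpyusr/.ssh/known_hosts
        name: "{{ sql_src_srv }}"
        key: "{{ keyscan.stdout }}"
        state: present

The known_hosts file may need to be owned by remcpyusr afterwards, depending on how the play escalates privileges.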
I'll try -o StrictHostKeyChecking=no in my rsync command.
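If you go that route, the option goes into the -e string of the existing rsync task; this is only a sketch, and StrictHostKeyChecking=accept-new (OpenSSH 7.6+) is a somewhat safer choice than no, because it still rejects a host whose key has changed:

    - name: Synchronize files to target host (accept host key on first contact)
      ansible.builtin.command:
        cmd: >-
          rsync -avz
          -e "ssh -i /home/remcpyusr/.ssh/id_rsa -o StrictHostKeyChecking=accept-new"
          remcpyusr@{{ sql_src_srv }}:/tmp/{{ sql_src_db }}.sql /tmp/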
I would welcome the synchronize module, but even at GL, where I opened a ticket that is not bug-related, I have gotten no feedback so far - I think if it's not a bug ticket they don't have the time to look at those too…
Maybe I'll ask Reddit too.
Now I'm struggling with adding to my variable fail-safe playbook a check whether the chosen DB even exists on the chosen source host.
I get the message that the DB is missing.
And I noticed that my playbook to add the SQL migration user executes successfully without login_user etc. - so couldn't I drop the migration user and use those community.mysql modules without another dedicated user?
---
- name: Display and compare extra variables and fail if they match
  hosts:
    - "{{ sql_dest_srv }}"  # {{ sql_dest_srv }} 192.168.151.141
    - "{{ sql_src_srv }}"   # {{ sql_src_srv }} 192.168.151.237
  # gather_facts: false
  vars:
    ansible_python_interpreter: /opt/myenv/bin/python3
  tasks:
    - name: Display the value of 'sql_src_srv'
      ansible.builtin.debug:
        msg: "The value of sql_src_srv is {{ sql_src_srv }}"

    - name: Display the value of 'sql_src_db'
      ansible.builtin.debug:
        msg: "The value of sql_src_db is {{ sql_src_db }}"

    - name: Display the value of 'sql_dest_srv'
      ansible.builtin.debug:
        msg: "The value of sql_dest_srv is {{ sql_dest_srv }}"

    - name: Check if 'sql_src_srv' and 'sql_dest_srv' have the same content and fail if true
      ansible.builtin.fail:
        msg: "NONONONONONO - The values of sql_src_srv and sql_dest_srv are identical, which is not allowed."
      when: sql_src_srv == sql_dest_srv

    - name: Check if database {{ sql_src_db }} exists at {{ sql_src_srv }}
      community.mysql.mysql_db:
        name: "{{ sql_src_db }}"
        state: present
        login_host: "{{ sql_src_srv }}"
        login_unix_socket: "/var/run/mysqld/mysqld.sock"
      register: result
      failed_when: result.failed
      changed_when: false

    - name: Fail if chosen database does not exist at source host
      ansible.builtin.fail:
        msg: "The Sakila database does not exist."
      when: result.msg is defined and '"sakila" does not exist' in result.msg
Can you even use community.mysql.mysql_db for checking?
I got the playbook running, but it just created the DB…
If not, is there a way to reuse the same authentication those modules use for a shell-based check?
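One way to avoid creating the database as a side effect is a read-only lookup instead of mysql_db with state: present. A minimal sketch, assuming community.mysql.mysql_info is available and socket authentication works the same way as in the play above (task names are illustrative, and the check relies on the databases key the module returns):

    - name: Collect the list of databases on the source host
      community.mysql.mysql_info:
        filter: databases
        login_unix_socket: /var/run/mysqld/mysqld.sock
      register: db_info

    - name: Fail if the chosen database does not exist at the source host
      ansible.builtin.fail:
        msg: "Database {{ sql_src_db }} does not exist on {{ sql_src_srv }}."
      when: sql_src_db not in db_info.databases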
Otherwise I have to use the SQL service user anyway…
Thank you in advance again!