SQL migration playbook questions

Hello, it's me again!

I have some customer environments which all use SQL CE as the database, and we're migrating the contents manually during migrations and server updates.
For testing I installed MySQL with the sakila demo DB.
I created a test playbook, but it seems I somehow have a misunderstanding of ansible.builtin.shell/command, because I get the error:
ERROR! 'ansible.builtin.shell/command' is not a valid attribute for a Play.
I tried the playbook with and without quotes.
Once I've got the principles down, I will replace the user/password with environment variables.
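For that last part, one common approach is the env lookup; a minimal sketch, assuming a made-up environment variable name MYSQL_ROOT_PASSWORD set on the machine running the play:

```yaml
vars:
  # Read the password from the environment instead of hardcoding it in the playbook
  mysql_root_password: "{{ lookup('env', 'MYSQL_ROOT_PASSWORD') }}"
```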

My playbook is:

---
- name: backup Sakila-Db
  hosts: localhost
  become: yes 

- name: backup Sakila-DB
  ansible.builtin.shell:
        cmd: "mysqldump -u root -pmysuperinsecuretestpassword sakila > /tmp/sakila_backup.sql"
  register: result
  ignore_errors: yes

Thank you in advance again!

1 Like

It might be a spacing issue:

in the cmd: line it seems there are four spaces instead of two.

2 Likes

Definitely a spacing/indentation issue.

The first four lines are fine, down through become: yes.

But after that, lined up with become:, you need tasks:

Then shift all the following lines to the right 4 spaces, starting with the second - name: backup Sakila-DB line.

It’s trying to interpret your ansible.builtin.shell task as the 2nd play in your playbook.

2 Likes

Alternatively, you can use community.mysql.mysql_db to create a dump. So there is no need to use the shell or command module.

    - name: dump sakila database
      community.mysql.mysql_db:
        login_host: localhost
        login_password: mysuperinsecuretestpassword
        login_port: 3306
        login_user: root
        single_transaction: true
        state: dump
        name: sakila
        target: "/tmp/sakila_backup.sql"
2 Likes

This looks nice!
Thank you!

To me it seems you could use both - I see the become: yes version everywhere, and mostly true when it's combined with a user.
This playbook still fails with the same error:

---
- name: backup Sakila-Db
  hosts: localhost
  become: yes 

  tasks:
- name: backup Sakila-DB
  ansible.builtin.shell:
  cmd: mysqldump -u root -ppasswort sakila > /tmp/sakila_backup.sql
  register: result
  ignore_errors: yes

I dropped everything into a YAML checker and it seemed alright.
I even removed the spaces one by one, and the checker had no issues from 4 down to 2 spaces… rather weird to me - maybe a bug.

cmd must not be on the same level as shell (the module name), because it is a parameter of the module itself… it must be a child item.
So yes, both are valid YAML. But only the code pasted below is valid Ansible "code":

---
- name: backup Sakila-Db
  hosts: localhost
  become: yes 

  tasks:
- name: backup Sakila-DB
  ansible.builtin.shell:
    cmd: mysqldump -u root -ppasswort sakila > /tmp/sakila_backup.sql
  register: result
  ignore_errors: yes

And become: true does not change anything for you here. It only means that the shell task is executed with privileges. So it's not needed, because you use root credentials for your mysqldump.

I now have a better idea what the playbook(s) need to do in the end:
migrate databases for dev/showcases and overwrite/delete if a DB with the same name exists.
Is there a method to move the dumps between the two servers, or do I need a network share to store the dump?

That's not really Ansible-related; it's more a DBA topic.
But theoretically you can do

mysqldump -h somehost somedb | mysql -h otherhost somedb
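Wrapped in a task, that one-liner could look roughly like this - a sketch with the same placeholder hosts, and pipefail enabled so a failing mysqldump isn't masked by the pipe:

```yaml
- name: Copy somedb straight from source to destination (no dump file on disk)
  ansible.builtin.shell:
    cmd: "set -o pipefail && mysqldump -h somehost somedb | mysql -h otherhost somedb"
    executable: /bin/bash
```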

Thank you!
This looks good.
I thought it would be possible with
community.mysql.mysql_db:
and an ssh/scp target.
This project will be my first workflow template, where a user chooses the source host, DB name, and target host, maybe with a required approval step.
I still have to look up how to do that.

You didn’t finish fixing your indentation. It’s valid YAML, but it isn’t valid Ansible. Your task needs to be indented under the tasks: keyword, thus:

---
- name: backup Sakila-Db
  hosts: localhost
  become: yes 

  tasks:
    - name: backup Sakila-DB
      ansible.builtin.shell:
        cmd: mysqldump -u root -ppasswort sakila > /tmp/sakila_backup.sql
      register: result
      ignore_errors: yes

YAML checkers are fine, but you should try ansible-lint.

"valid YAML" != "valid Ansible"

1 Like

Looks interesting - is this some kind of GitHub/GitLab or VS Code plugin?

ansible-lint is a stand-alone CLI tool, like ansible-playbook. See https://ansible.readthedocs.io/projects/lint/installing/ Although, in my case, my distribution packages it up: python3-ansible-lint-24.2.0-1.fc40.noarch.

I will take a look tomorrow.
Can you tell me how a workflow sees whether a playbook was successful or not?
I want to include a test to check if someone made impossible choices in the survey.
I have 3 survey fields: source server, destination server, and DB table. The servers come from the same list, so I need to check if someone has set the same source and destination server.
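A failed task fails the job, and an AWX workflow can branch on a job's success or failure. So one sketch for that survey check (source_host and dest_host are hypothetical names for your survey variables) is an early assert:

```yaml
- name: Refuse impossible survey choices
  ansible.builtin.assert:
    that:
      - source_host != dest_host
    fail_msg: "Source and destination server must not be the same."
```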

I found the solution for my playbook errors:
it's been a while since I last used AWX.
I had used hosts: localhost, so the playbooks ran locally on the AWX Kubernetes machine, where no SQL stuff is around.
I set it to all and everything is alright.

1 Like

But I got an endlessly running playbook for the file transfer:
16 hours for a 3 MB file.
I stopped it; first, here is my setup updated to the latest version:

---
- name: Transfer MySQL Database Dump File
  hosts: all
  gather_facts: false

  tasks:
    - name: Transfer dump file from 192.168.151.237 to 192.168.19.201
      ansible.builtin.synchronize:
        src: "/tmp/sakila_backup.sql"
        dest: "/tmp/sakila_backup.sql"
        mode: push  
      delegate_to: 192.168.151.237  
      when: inventory_hostname == "192.168.19.201"
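If synchronize keeps hanging - it needs rsync installed plus working SSH directly between the two hosts - one alternative sketch is to route the file through the controller with fetch and copy instead (same IPs and paths as above, assumed from your playbook; fine for a 3 MB dump, slow for big ones):

```yaml
- name: Transfer dump via the controller (no host-to-host SSH needed)
  hosts: all
  gather_facts: false

  tasks:
    - name: Pull the dump from the source host to the controller
      ansible.builtin.fetch:
        src: /tmp/sakila_backup.sql
        dest: /tmp/sakila_backup.sql
        flat: true
      when: inventory_hostname == "192.168.151.237"

    - name: Push the dump from the controller to the destination host
      ansible.builtin.copy:
        src: /tmp/sakila_backup.sql
        dest: /tmp/sakila_backup.sql
      when: inventory_hostname == "192.168.19.201"
```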

My chatbot buddy mentioned it's better to remove the passphrase from the private key.
It would be the first playbook with a problem like this, but I'll try it anyway.
The old log:

Enter passphrase for /runner/artifacts/2761/ssh_key_data: 
Identity added: /runner/artifacts/2761/ssh_key_data (/runner/artifacts/2761/ssh_key_data)
ansible-playbook [core 2.15.12rc1]
  config file = None
  configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /runner/requirements_collections:/runner/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.18 (main, Jan 24 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3)
  jinja version = 3.1.4
  libyaml = True
No config file found; using defaults
host_list declined parsing /runner/inventory/hosts as it did not pass its verify_file() method
Parsed /runner/inventory/hosts inventory source with script plugin
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
Skipping callback 'awx_display', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: sql_remote_dump-test.yaml ********************************************
1 plays in functions/SQL/migration/sql_remote_dump-test.yaml

PLAY [Transfer MySQL Database Dump File] ***************************************
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize

TASK [Transfer dump file from 192.168.151.237 to 192.168.19.201] ***************
task path: /runner/project/functions/SQL/migration/sql_remote_dump-test.yaml:7
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
skipping: [192.168.151.237] => {
    "changed": false,
    "false_condition": "inventory_hostname == \\"192.168.19.201\\"",
    "skip_reason": "Conditional result was False"
}
redirecting (type: modules) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.151.237> (0, b'/root\\n', b"Warning: Permanently added '192.168.151.237' (ED25519) to the list of known hosts.\\r\\n")
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035 `" && echo ansible-tmp-1715783402.0129042-28-89502946805035="` echo /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035 `" ) && sleep 0'"'"''
<192.168.151.237> (0, b'ansible-tmp-1715783402.0129042-28-89502946805035=/root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035\\n', b'')
<192.168.19.201> Attempting python interpreter discovery
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'python3.11'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.10'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.9'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.8'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.151.237> (0, b'PLATFORM\\nLinux\\nFOUND\\n/usr/bin/python3.10\\n/usr/bin/python3\\nENDFOUND\\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3.10 && sleep 0'"'"''
<192.168.151.237> (0, b'{"platform_dist_result": [], "osrelease_content": "PRETTY_NAME=\\\\"Ubuntu 22.04.4 LTS\\\\"\\\\nNAME=\\\\"Ubuntu\\\\"\\\\nVERSION_ID=\\\\"22.04\\\\"\\\\nVERSION=\\\\"22.04.4 LTS (Jammy Jellyfish)\\\\"\\\\nVERSION_CODENAME=jammy\\\\nID=ubuntu\\\\nID_LIKE=debian\\\\nHOME_URL=\\\\"https://www.ubuntu.com/\\\\"\\\\nSUPPORT_URL=\\\\"https://help.ubuntu.com/\\\\"\\\\nBUG_REPORT_URL=\\\\"https://bugs.launchpad.net/ubuntu/\\\\"\\\\nPRIVACY_POLICY_URL=\\\\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\\\\"\\\\nUBUNTU_CODENAME=jammy\\\\n"}\\n', b'')
Using module file /usr/share/ansible/collections/ansible_collections/ansible/posix/plugins/modules/synchronize.py
<192.168.151.237> PUT /runner/.ansible/tmp/ansible-local-22df7j6lru/tmpv1jqciiq TO /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py
<192.168.151.237> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' '[192.168.151.237]'
<192.168.151.237> (0, b'sftp> put /runner/.ansible/tmp/ansible-local-22df7j6lru/tmpv1jqciiq /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py\\n', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' 192.168.151.237 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/ /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py && sleep 0'"'"''
<192.168.151.237> (0, b'', b'')
<192.168.151.237> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.151.237> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/63d63d053c"' -tt 192.168.151.237 '/bin/sh -c '"'"'/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1715783402.0129042-28-89502946805035/AnsiballZ_synchronize.py && sleep 0'"'"''

Looks like part of the log is missing? Is it failing to connect after removing the passphrase?