Restart clustered service node by node?

Hi All :wink:

I currently am writing automation for a Percona Galera cluster which consists of 3 nodes. I am uploading a config file with:

  - name: upload my.cnf
    template: src=my.cnf.j2 dest=/etc/my.cnf
    when: "'db-cluster' in group_names"
    register: db_cluster_restart

When the above config file changes I would like to restart the db cluster node by node:

  1. restart mysql on 1st cluster node and wait for the restart to finish

  2. restart mysql on 2nd cluster node and wait for the restart to finish

  3. restart mysql on 3rd cluster node and wait for the restart to finish

My cluster group is defined this way:

[db-cluster]
db-cluster-0 ansible_ssh_host=192.168.0.1
db-cluster-1 ansible_ssh_host=192.168.0.2
db-cluster-2 ansible_ssh_host=192.168.0.3

What is the best way to achieve this kind of functionality?

BR,
Rafal.

Set `serial: 1` at the play level; it will force all the tasks to run
for one host at a time. Combine that with `wait_for` to test that the
MySQL DB is back up.
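A minimal sketch of what that could look like, assuming the service is named `mysql` and listens on port 3306 (adjust both to your setup):

```yaml
# Rolling restart: serial: 1 runs the whole play on one host at a time.
- name: rolling restart of the Galera cluster
  hosts: db-cluster
  serial: 1
  tasks:
    - name: restart mysql
      service: name=mysql state=restarted

    # Block until the node accepts TCP connections again before
    # Ansible moves on to the next host.
    - name: wait for mysql to come back up
      wait_for: host={{ ansible_ssh_host }} port=3306 delay=5 timeout=300
```

Note that `wait_for` only checks that the port is open; for Galera you may additionally want to poll `wsrep_ready` before proceeding.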

Thanks for the tip. Overall `serial` looks good, but it is usable only at the play level. In my case I have one play which uploads the config-file template and performs many other actions to configure the MySQL cluster. I cannot afford setting `serial: 1` for the whole play, and as I see in http://stackoverflow.com/questions/26685544/rolling-restart-with-ansible-handlers it is not possible to use serial for specific tasks and/or handlers only.

Are there any other options?

You can have more than one play in a playbook:

- name: play1
  hosts: ...
  tasks:
     - do most of the work

- name: play2
  hosts: ...
  serial: 1
  tasks:
     - restart and wait, one host at a time
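Building on the multi-play idea, one way to restart only when the template actually changed the file is to check the registered result from the first play in the second one. Registered variables persist across plays within the same run, so a sketch could look like this (service name `mysql` and port 3306 are assumptions):

```yaml
- name: play1 - configure the cluster
  hosts: db-cluster
  tasks:
    - name: upload my.cnf
      template: src=my.cnf.j2 dest=/etc/my.cnf
      register: db_cluster_restart
    # ... other configuration tasks ...

- name: play2 - rolling restart, one node at a time
  hosts: db-cluster
  serial: 1
  tasks:
    - name: restart mysql if the config changed
      service: name=mysql state=restarted
      when: db_cluster_restart|changed

    - name: wait for mysql to come back
      wait_for: host={{ ansible_ssh_host }} port=3306 delay=5 timeout=300
      when: db_cluster_restart|changed
```

This keeps the heavy configuration work parallel in play1 while only the restart sequence is serialized in play2.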