How to run tasks only after a snapshot was successfully taken?

Hi @all,

let me explain my scenario first, before I come to my question.

We run most of our Linux servers as virtual machines in VMware vSphere clusters. I would like to use a playbook that takes a snapshot of a VM before it runs further tasks on the target node. In case of a broken system after an upgrade I could revert to the snapshot and I'm up and running again.

Here is a non-working example playbook:

---
- hosts: localhost
  connection: local

Instead of using hosts: localhost, you could use the inventory hostname or the group.
Since you have connection: local it will run on the controller host.

The benefit is that if you use "register: some_variable" then the variable will be registered on the host and not on localhost.

  tasks:
    - name: snapshot operation
      vmware_guest_snapshot:
        hostname: "{{ vc }}"
        username: "{{ vc_user }}"
        password: "{{ vc_passwd }}"
        datacenter: "{{ vc_datacenter }}"
        folder: "/mydatacenter/vm/{{ item.folder }}/"
        name: "{{ item.vm }}"
        state: present
        snapshot_name: snap1
        description: "Insert your description here"
        validate_certs: false
      delegate_to: localhost
      with_items:
        - "{{ nested_list }}"

Then you could use "register: status_snapshot" on this task, and check its status in the next play.

- hosts: staging
  tasks:
    - name: run yum -y update
      yum:
        name: "*"
        state: latest

Variables registered in one play are available in subsequent plays in the same playbook.
So you can then use the variable status_snapshot to decide what to do in this play.

The group 'staging' in the second play contains the hosts that should be
updated after a snapshot was taken. In the example above Ansible would
first try to create snapshots and then update my nodes whether the
snapshot creation was successful or not. I have no idea how to create a
dependency from the second play to the first one.

Do you understand what I'm trying to accomplish? Does someone have an idea on
how to solve this?

Yes, it's understandable what you are trying to do; hopefully my suggestion gives you a tip on how to achieve it.
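Putting the suggestion together, here is a minimal sketch (reusing the variable names from the playbook above; the loop and folder handling are simplified assumptions, not the exact setup):

```yaml
---
# Play 1: runs against the staging hosts, but the snapshot task is
# delegated to localhost, so the result is registered per host.
- hosts: staging
  gather_facts: false
  tasks:
    - name: Take a snapshot before upgrading
      vmware_guest_snapshot:
        hostname: "{{ vc }}"
        username: "{{ vc_user }}"
        password: "{{ vc_passwd }}"
        datacenter: "{{ vc_datacenter }}"
        folder: "/mydatacenter/vm/{{ folder }}/"
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: snap1
        validate_certs: false
      delegate_to: localhost
      register: status_snapshot

# Play 2: only hosts whose snapshot task succeeded reach this play,
# because a failed task removes the host from the run by default.
- hosts: staging
  tasks:
    - name: run yum -y update
      yum:
        name: "*"
        state: latest
```

With this layout you often don't even need a `when` condition on the second play, since hosts with a failed snapshot task drop out of the run automatically.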

> Then you could use "register: status_snapshot" on this task, and check its status in the next play.

Ok, I'll try that. Since one of the common return values of Ansible
modules is
[failed](http://docs.ansible.com/ansible/latest/common_return_values.html#id5),
I would like to use it in my play. But how can I use it? Do I simply
have to use "register: failed" to check it in the next play?

> Variables registered in one play are available in subsequent plays in the same playbook.
> So you can then use the variable status_snapshot to decide what to do in this play.

Could you give an example of how to evaluate the variable, please?

Best regards,
Joerg

>
> Then you could use "register: status_snapshot" on this task, and check its status in the next play.

> Ok, I'll try that. Since one of the common return values of Ansible
> modules is
> [failed](http://docs.ansible.com/ansible/latest/common_return_values.html#id5),
> I would like to use it in my play. But how can I use it? Do I simply
> have to use "register: failed" to check it in the next play?

When a task fails, the default behavior is that the playbook stops for that host.
So even if you have several plays in the playbook, they will not run on that host if one of the earlier tasks has failed.

So you wouldn't need to do anything to prevent the second play from running, as long as the snapshot task fails for the same host that the later plays target.

But let's say you have a database server (db-server) and an application server (app-server).
Before upgrading the app-server, a snapshot of the db-server needs to be taken so a rollback is possible, since the upgrade of the app-server also changes the database template.
In this kind of scenario you'll need to do some checking.
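In that cross-host scenario, a result registered on one host can be read from another play via `hostvars`. A minimal sketch, using the hypothetical db-server/app-server names from above and the connection variables from earlier (module options simplified):

```yaml
---
- hosts: db-server
  gather_facts: false
  tasks:
    - name: Snapshot the database server before the app upgrade
      vmware_guest_snapshot:
        hostname: "{{ vc }}"
        username: "{{ vc_user }}"
        password: "{{ vc_passwd }}"
        datacenter: "{{ vc_datacenter }}"
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: pre-upgrade
        validate_certs: false
      delegate_to: localhost
      register: status_snapshot
      ignore_errors: true

- hosts: app-server
  tasks:
    - name: Upgrade only if the db-server snapshot did not fail
      yum:
        name: "*"
        state: latest
      # Guard against the variable being missing (e.g. db-server unreachable)
      when: >
        hostvars['db-server'].status_snapshot is defined and
        not hostvars['db-server'].status_snapshot.failed
```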

But if you have disabled the halt-on-error behavior (e.g. with ignore_errors) you can use this to check:

- debug: msg="Will run if task failed"
  when: status_snapshot.failed
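For completeness, a sketch of that ignore_errors variant: the snapshot task no longer stops the run, so the update play is gated on the registered result instead (connection options abbreviated, names reused from the playbooks above):

```yaml
---
- hosts: staging
  gather_facts: false
  tasks:
    - name: snapshot operation
      vmware_guest_snapshot:
        hostname: "{{ vc }}"
        username: "{{ vc_user }}"
        password: "{{ vc_passwd }}"
        datacenter: "{{ vc_datacenter }}"
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: snap1
        validate_certs: false
      delegate_to: localhost
      register: status_snapshot
      # Keep the host in the run even if the snapshot fails
      ignore_errors: true

- hosts: staging
  tasks:
    - name: run yum -y update
      yum:
        name: "*"
        state: latest
      # Only update hosts whose snapshot succeeded
      when: not status_snapshot.failed
```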

>> - hosts: staging
>>   tasks:
>>     - name: run yum -y update
>>       yum:
>>         name: "*"
>>         state: latest
>
> Variables registered in one play are available in subsequent plays in the same playbook.
> So you can then use the variable status_snapshot to decide what to do in this play.

> Could you give an example of how to evaluate the variable, please?

- hosts: vm1,vm2,vm3
  tasks:
    - name: Create random number from 1 to 2
      command: shuf -i 1-2 -n 1
      register: result

- hosts: vm1,vm2,vm3
  tasks:
    - debug: msg="{{ inventory_hostname }} found the correct number, the number was 2"
      when: result.stdout == "2"

This will run the command shuf on all VMs; the task in the second play then checks the result of shuf.
If shuf returned the number 2, the debug task will run on only that host; on the other hosts the task will be skipped.