gcp_compute_instance module not attaching persistent disk to VM

Hi All,

I’m working on a playbook that creates a new persistent disk in Google Cloud, attaches it to an existing VM, then formats, mounts and sets it up. Everything seems to work except the step where the disk gets attached; Ansible appears to ignore that VM state change.

I first tried simply calling the existing VM again with the new disk, which didn’t work. Then I thought the problem might be that the other existing disks were not declared, so I wrote tasks that gather the VM’s disks and build dictionaries with the properties required by the “disks” parameter of the gcp_compute_instance module, which is how the playbook below is currently set up. That didn’t work either.
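
Each element of the list that ends up in “disks” is built with these keys (matching the set_fact tasks in the playbook below; the values here are only illustrative):

    - auto_delete: true
      boot: true
      source: "..."   # existing disk entry from gcp_compute_instance_facts
    - auto_delete: false
      boot: false
      source: "..."   # the new scratch disk (registered gcp_compute_disk result)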

I’ve also tried adding tasks before and after the attach attempt that set the VM status to TERMINATED and then back to RUNNING, to rule out the possibility of Google Cloud not accepting attachments while the machine is running (although doing this in the console works fine).
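
Those stop/start tasks were just gcp_compute_instance calls with the status parameter, roughly like this:

      - name: Stop VM before attaching the disk
        gcp_compute_instance:
          state: present
          status: TERMINATED
          name: "{{ instance_name }}"
          zone: "{{ zone }}"
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"

      - name: Start VM again after attaching the disk
        gcp_compute_instance:
          state: present
          status: RUNNING
          name: "{{ instance_name }}"
          zone: "{{ zone }}"
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"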

Anything I might be missing here?

I noticed that the azure_rm_managed_disk module has a parameter called managed_by. Maybe the GCP modules are yet to implement equivalent functionality?

Playbook: scratch.yml

---
- name: Add a scratch disk to an existing VM
  hosts: localhost
  gather_facts: no
  connection: local

  vars_files: 
    - vars/gcp_vm

  tasks:
  - name: Provision scratch and attach to VM
    when: destroy == "no"

    block:
      - name: Getting facts from VM
        gcp_compute_instance_facts:
          zone: "{{ zone }}"
          filters:
          - name = {{ instance_name }}
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"
        register: vm

      - name: Gathering VMs existing disks
        set_fact:
          disks: "{{ vm['items'][0]['disks'] }}"

      - name: Create scratch disk
        gcp_compute_disk:
          name: "{{ scratch_disk_name }}"
          type: "{{ scratch_disk_type }}"
          zone: "{{ zone }}"
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"
          state: present
        register: scratch

      - name: Convert new disk to format expected by gcp_compute_instance_facts
        vars: 
          newdisk: []
        set_fact:
          newdisk: "{{ newdisk + [ {'auto_delete': false, 'boot': false, 'source': item[1]} ] }}"
        with_indexed_items: "{{ [ scratch ] }}"

      - name: Create list of currently attached disks
        vars:
          alldisks: []
        set_fact:
          alldisks: "{{ alldisks + [ {'auto_delete': item['autoDelete'], 'boot': item['boot'], 'source': item} ] }}"
        with_items: "{{ disks }}"

      - name: Append new disk
        set_fact:
          alldisks: "{{ alldisks + [ newdisk[0] ] }}"

      - name: Attach disk to VM
        gcp_compute_instance:
          state: present
          name: "{{ instance_name }}"
          disks: "{{ alldisks }}"
          zone: "{{ zone }}"
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"

      - name: Get IP address from instance
        gcp_compute_address:
          name: "{{ instance_name }}"
          region: "{{ region }}"
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"
          state: present
        register: address

      - name: Wait for SSH to come up
        wait_for: host={{ address.address }} port=22 delay=10 timeout=60

      - name: Add host to groupname
        add_host: hostname={{ address.address }} groupname=vm

- name: Format and mount scratch disk
  hosts: vm
  connection: ssh
  become: True
  vars_files: 
    - vars/consultancy
  roles: 
    - scratch

- name: Destroy scratch
  hosts: localhost
  gather_facts: no
  connection: local

  vars_files: 
    - vars/gcp_vm

  tasks:
    - name: Destroy disk
      when: destroy == "yes"
      gcp_compute_disk:
        name: "{{ scratch_disk_name }}"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: absent

Playbook call:

ansible-playbook scratch.yml --extra-vars "instance_name=work scratch_disk_name=wave destroy=no scratch_pathname=/data/wave"
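
For completeness, tearing the scratch disk down again is the same call with destroy=yes (the output below is from the provisioning run above):

ansible-playbook scratch.yml --extra-vars "instance_name=work scratch_disk_name=wave destroy=yes"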

Result:

PLAY [Add a scratch disk to an existing VM] ************************************************************************************************************************************************

TASK [Getting facts from VM] ***************************************************************************************************************************************************************
ok: [localhost]

TASK [Gathering VMs existing disks] ********************************************************************************************************************************************************
ok: [localhost]

TASK [Create scratch disk] *****************************************************************************************************************************************************************
changed: [localhost]

TASK [Convert new disk to format expected by gcp_compute_instance_facts] *******************************************************************************************************************
ok: [localhost] =>

TASK [Create list of currently attached disks] *********************************************************************************************************************************************
ok: [localhost] => 

TASK [Append new disk] *********************************************************************************************************************************************************************
ok: [localhost]

TASK [Attach disk to VM] *******************************************************************************************************************************************************************
ok: [localhost]

TASK [Get IP address from instance] ********************************************************************************************************************************************************
ok: [localhost]

TASK [Wait for SSH to come up] *************************************************************************************************************************************************************
ok: [localhost]

TASK [Add host to groupname] ***************************************************************************************************************************************************************
changed: [localhost]

PLAY [Format and mount scratch disk] *******************************************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************************
ok: [35.196.130.236]

TASK [scratch : Format disk] ***************************************************************************************************************************************************************
fatal: [35.196.130.236]: FAILED! => {"changed": false, "msg": "Device /dev/disk/by-id/google-wave not found."}

PLAY RECAP *********************************************************************************************************************************************************************************
35.196.130.236             : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
localhost                  : ok=10   changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

So the “Attach disk to VM” task returned OK, while I’d expect it to return CHANGED given Ansible’s declarative behaviour. If I attach the disk manually via the GCP console and then re-run the playbook, it works.
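
A quick way to double-check on the Ansible side is to re-query the instance right after the attach task and print the disk sources it reports, for example:

      - name: Re-check disks reported for the VM after the attach task
        gcp_compute_instance_facts:
          zone: "{{ zone }}"
          filters:
          - name = {{ instance_name }}
          project: "{{ gcp_project }}"
          auth_kind: "{{ gcp_cred_kind }}"
          service_account_file: "{{ gcp_cred_file }}"
        register: vm_after

      - name: Show attached disk sources
        debug:
          msg: "{{ vm_after['items'][0]['disks'] | map(attribute='source') | list }}"

If the new scratch disk doesn’t show up there, the attach call really is a no-op on the GCP side.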

Thank you!

Hello Rafael,

Did you find a fix for this issue? I think I am experiencing the same problem.

Best regards,
Dimitar

Hello Dimitar,

Nope, not yet. I haven’t revisited the issue in a while.

Regards,
Rafael

Hello Rafael,

Just to let you know, I have opened a bug report about this:

https://github.com/ansible/ansible/issues/68766

Best regards,