vSphere VM: Expand Root Volume with Ansible

Hi team,
I am trying to automate increasing the disk size of my VM and expanding my root volume. The VM is in vSphere and below is my Ansible playbook. Using the playbook I have increased the disk size, but I am failing at the partitioning and everything after it.
My observation is that I am failing at the "Rescan SCSI bus to detect new disk size" step: even after increasing the disk size, I can't see the new space under sda2 in the lsblk output, it still shows only the old size. Can anyone guide me?

I am using Rocky Linux 8.

My playbook:

### PHASE 1: STORAGE ANALYSIS ###
- name: Gather current storage information
  block:
    - name: Get root filesystem usage
      command: df -h /
      register: root_fs_usage
      changed_when: false

    - name: Capture disk information
      command: lsblk -o NAME,SIZE,TYPE,MOUNTPOINT -p
      register: lsblk_output
      changed_when: false

    - name: Display current storage status
      debug:
        msg: |
          Current Root Filesystem Status:
          - Device: {{ root_fs_usage.stdout_lines[1].split()[0] }}
          - Size: {{ root_fs_usage.stdout_lines[1].split()[1] }}
          - Used: {{ root_fs_usage.stdout_lines[1].split()[2] }}
          - Available: {{ root_fs_usage.stdout_lines[1].split()[3] }}
          - Use%: {{ root_fs_usage.stdout_lines[1].split()[4] }}

    - name: Gather mount facts
      setup:
        filter: ansible_mounts
      register: mounts

    - name: Extract root_fs
      set_fact:
        root_fs: "{{ (mounts.ansible_facts.ansible_mounts | selectattr('mount', '==', '/') | first).device }}"
      when: mounts.ansible_facts.ansible_mounts

    - name: Extract root filesystem details
      set_fact:
        root_vg: "{{ root_fs.split('/')[-1].split('-')[0] if 'mapper' in root_fs else '' }}"
        root_lv: "{{ root_fs.split('/')[-1].split('-')[1] if 'mapper' in root_fs else '' }}"
        filesystem_type: "{{ (mounts.ansible_facts.ansible_mounts | selectattr('mount', '==', '/') | first).fstype }}"
        current_root_size_gb: "{{ (root_fs_usage.stdout_lines[1].split()[1] | regex_replace('G','')) | float }}"
      when: mounts.ansible_facts.ansible_mounts

    - name: Validate LVM configuration
      fail:
        msg: "Root filesystem is not on LVM. Found device '{{ root_fs }}'. This playbook requires LVM configuration."
      when: "'mapper' not in root_fs"

    - name: Calculate disk_expansion_needed
      set_fact:
        disk_expansion_needed: "{{ desired_root_size | float - current_root_size_gb | float }}"

  tags: analysis

### PHASE 2: DECISION MAKING ###
- name: Determine if expansion is needed
  block:
    - name: Set expansion flag
      set_fact:
        needs_expansion: "{{ (current_root_size_gb | float) < (desired_root_size | float * 0.9) }}"
    
    - name: Check if already at desired size
      set_fact:
        needs_expansion: false
      when: current_root_size_gb | float >= desired_root_size | float

    - name: Debug expansion decision
      debug:
        msg: |
          Expansion Decision:
          - Current Root Size: {{ current_root_size_gb }}GB
          - Desired Root Size: {{ desired_root_size }}GB
          - Needs Expansion: {{ needs_expansion | default('false') }}
          - Additional Space Needed: {{ disk_expansion_needed }}GB
      when: needs_expansion is defined

    - name: Skip if no expansion needed
      debug:
        msg: "No expansion needed - current root size is {{ current_root_size_gb }}GB"
      when: not needs_expansion | default(false) | bool
  tags: decision

### PHASE 3: DISK EXPANSION (Conditional) ###
- name: Expand vSphere disk (if needed)
  when: needs_expansion | default(false) | bool
  block:
    - name: Calculate current_disk_size
      set_fact:
        current_disk_size: "{{ (lsblk_output.stdout | regex_search('/dev/sda\\s+(\\d+)G', '\\1') | first) | int }}"

    - name: Calculate new disk size
      set_fact:
        new_disk_size: "{{ (current_disk_size | int) + (disk_expansion_needed | int) }}"

    - name: Calculate final disk size with safety cap
      set_fact:
        final_disk_size: "{{ [(new_disk_size | int), (safety_cap_gb | default(500) | int)] | min }}"

    - name: Debug disk expansion parameters
      debug:
        msg: |
          Disk Expansion Parameters:
          - Current Disk Size: {{ current_disk_size }}GB
          - Additional Space Needed: {{ disk_expansion_needed }}GB
          - New Disk Size: {{ final_disk_size }}GB

    - name: Gather disk info from virtual machine using name
      community.vmware.vmware_guest_disk_info:
        hostname: "{{ target_vcenter.hostname }}"
        username: "{{ target_vcenter.username }}"
        password: "{{ target_vcenter.password }}"
        datacenter: "{{ vm_datacenter }}"
        name: "{{ vm_search_name | default(inventory_hostname) }}"
      delegate_to: localhost
      become: false
      register: disk_info

    - name: Validate disk info
      assert:
        that:
          - disk_info.guest_disk_info is defined
          - disk_info.guest_disk_info['0'] is defined
        fail_msg: "Could not retrieve disk controller information"
      when: disk_info is defined

    - name: Perform disk expansion (VMware)
      community.vmware.vmware_guest_disk:
        hostname: "{{ target_vcenter.hostname }}"
        username: "{{ target_vcenter.username }}"
        password: "{{ target_vcenter.password }}"
        datacenter: "{{ vm_datacenter }}"
        validate_certs: no
        name: "{{ vm_search_name | default(inventory_hostname) }}"
        disk:
          - size_gb: "{{ final_disk_size }}"
            unit_number: 0
            controller_type: "{{ disk_info.guest_disk_info['0'].controller_type }}"
            controller_number: "{{ disk_info.guest_disk_info['0'].controller_bus_number }}"
      delegate_to: localhost
      become: false
      async: 300
      poll: 5
      register: disk_resize
      until: disk_resize is succeeded
      retries: "{{ (max_retry_attempts | default(3)) | int }}"
      delay: "{{ (retry_delay_seconds | default(10)) | int }}"
      when:
        - target_vcenter is defined
        - vm_datacenter is defined
        - disk_info is defined

    - name: Debug disk expansion result
      debug:
        var: disk_resize
      when: disk_resize is defined

    - name: Verify disk was expanded
      command: lsblk -o SIZE -n -p /dev/sda
      register: new_disk_check
      changed_when: false
  tags: disk_expansion

### PHASE 5: OS-LEVEL STORAGE EXPANSION ###
- name: Perform storage expansion at OS level (if needed)
  when: needs_expansion | default(false) | bool
  block:
    - name: Install required packages
      package:
        name: ['cloud-utils-growpart']
        state: present

    - name: Rescan SCSI bus to detect new disk size
      command: |
        echo 1 > /sys/class/block/sda/device/rescan
      changed_when: false

    - name: Wait for rescan to complete
      pause:
        seconds: 2

    - name: Read device information
      community.general.parted:
        device: /dev/sda
        unit: GB
      register: device_info

    - name: Capture disk information
      command: lsblk -o NAME,SIZE,TYPE,MOUNTPOINT -p
      register: lsblk_output
      changed_when: false

    - name: Calculate current_disk_size
      set_fact:
        new_added_disk_size: "{{ (lsblk_output.stdout | regex_search('/dev/sda\\s+(\\d+)G', '\\1') | first) | int }}"

    - name: faile if added disk size not reflected 
      fail:
        msg: Disk scan fail
      when: new_added_disk_size == current_disk_size

    - name: Expand partition (using best available method)
      block:
        - name: Try growpart first (more reliable for online resizing)
          command: growpart /dev/sda 2
          args:
            creates: /tmp/growpart_complete
          register: growpart_result
          ignore_errors: yes
          changed_when: "'CHANGED' in growpart_result.stdout"

        - name: Fallback to parted if growpart fails
          community.general.parted:
            device: /dev/sda
            number: 2
            part_end: "100%"
            resize: true
            state: present
          when: growpart_result is failed or 'NOCHANGE' in growpart_result.stdout
          register: parted_result

    - name: Refresh partition table
      command: partprobe /dev/sda

    - name: Resize physical volume
      command: pvresize /dev/sda2
      changed_when: true

    - name: Check available space
      command: vgs --units g -o vg_free "{{ root_vg }}"
      register: vg_free
      changed_when: false

    - name: Extend logical volume
      command: lvextend -l +100%FREE /dev/{{ root_vg }}/{{ root_lv }}
      when: vg_free.stdout | float > 0
      
    - name: Verify extension
      command: lvs --units g -o lv_size /dev/{{ root_vg }}/{{ root_lv }}
      register: lv_size
      changed_when: false
      failed_when: "(lv_size.stdout | float) < {{ desired_root_size | float }}"

    - name: Expand filesystem (conditional by type)
      block:
        - name: Expand XFS filesystem (online)
          command: xfs_growfs "{{ root_fs }}"
          when: filesystem_type == "xfs"

        - name: Expand ext4 filesystem
          command: resize2fs /dev/{{ root_vg }}/{{ root_lv }}
          when: filesystem_type == "ext4"

        - name: Handle unsupported filesystem
          fail:
            msg: "Unsupported filesystem type '{{ filesystem_type }}'. Only xfs and ext4 are supported."
          when: filesystem_type not in ["xfs", "ext4"]
  tags: os_expansion

### PHASE 6: VERIFICATION ###
- name: Verify expansion results
  block:
    - name: Verify new space
      command: df -h /
      register: df_output
      changed_when: false
      when: needs_expansion | default(false) | bool

    - name: Verify final size
      assert:
        that:
          - "(df_output.stdout_lines[1].split()[1] | regex_replace('G','')) | float >= desired_root_size | float"
        fail_msg: "Failed to reach desired size of {{ desired_root_size }}GB"
        success_msg: "Successfully expanded to {{ (df_output.stdout_lines[1].split()[1] | regex_replace('G','')) | float }}GB"
      when: needs_expansion | default(false) | bool

    - name: Show final result
      debug:
        msg: |
          [SUCCESS] Root filesystem expansion complete
          Previous size: {{ current_root_size_gb }}GB
          New size: {{ (df_output.stdout_lines[1].split()[1] | regex_replace('G','')) | float }}GB
          Desired size: {{ desired_root_size }}GB
          Filesystem type: {{ filesystem_type }}
      when: 
        - needs_expansion | default(false) | bool
        - df_output is defined
  tags: verification

My output:

TASK [Get root filesystem usage] ***********************************************
task path: /runner/project/tasks/expand_root.yaml:4
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "cmd": ["df", "-h", "/"], "delta": "0:00:00.003332", "end": "2025-06-16 22:46:02.629258", "msg": "", "rc": 0, "start": "2025-06-16 22:46:02.625926", "stderr": "", "stderr_lines": [], "stdout": "Filesystem                       Size  Used Avail Use% Mounted on\\n/dev/mapper/RootVolGroup00-root  5.0G  3.7G  1.4G  73% /", "stdout_lines": ["Filesystem                       Size  Used Avail Use% Mounted on", "/dev/mapper/RootVolGroup00-root  5.0G  3.7G  1.4G  73% /"]}

TASK [Capture disk information] ************************************************
task path: /runner/project/tasks/expand_root.yaml:9
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "cmd": ["lsblk", "-o", "NAME,SIZE,TYPE,MOUNTPOINT", "-p"], "delta": "0:00:00.006751", "end": "2025-06-16 22:46:04.614914", "msg": "", "rc": 0, "start": "2025-06-16 22:46:04.608163", "stderr": "", "stderr_lines": [], "stdout": "NAME                                SIZE TYPE MOUNTPOINT\\n/dev/sda                             40G disk \\n├─/dev/sda1                           1G part /boot\\n└─/dev/sda2                          39G part \\n  ├─/dev/mapper/RootVolGroup00-root   5G lvm  /\\n  ├─/dev/mapper/RootVolGroup00-swap   4G lvm  \\n  ├─/dev/mapper/RootVolGroup00-tmp    1G lvm  /tmp\\n  ├─/dev/mapper/RootVolGroup00-home   5G lvm  /home\\n  ├─/dev/mapper/RootVolGroup00-var   12G lvm  /var\\n  └─/dev/mapper/RootVolGroup00-opt   12G lvm  /opt", "stdout_lines": ["NAME                                SIZE TYPE MOUNTPOINT", "/dev/sda                             40G disk ", "├─/dev/sda1                           1G part /boot", "└─/dev/sda2                          39G part ", "  ├─/dev/mapper/RootVolGroup00-root   5G lvm  /", "  ├─/dev/mapper/RootVolGroup00-swap   4G lvm  ", "  ├─/dev/mapper/RootVolGroup00-tmp    1G lvm  /tmp", "  ├─/dev/mapper/RootVolGroup00-home   5G lvm  /home", "  ├─/dev/mapper/RootVolGroup00-var   12G lvm  /var", "  └─/dev/mapper/RootVolGroup00-opt   12G lvm  /opt"]}

TASK [Display current storage status] ******************************************
task path: /runner/project/tasks/expand_root.yaml:14
ok: [control-ho-a02q.sys.comcast.net] => {
    "msg": "Current Root Filesystem Status:\\n- Device: /dev/mapper/RootVolGroup00-root\\n- Size: 5.0G\\n- Used: 3.7G\\n- Available: 1.4G\\n- Use%: 73%\\n"
}

TASK [Gather mount facts] ******************************************************
task path: /runner/project/tasks/expand_root.yaml:24
ok: [control-ho-a02q.sys.comcast.net]

TASK [Extract root_fs] *********************************************************
task path: /runner/project/tasks/expand_root.yaml:29
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"root_fs": "/dev/mapper/RootVolGroup00-root"}, "changed": false}

TASK [Extract root filesystem details] *****************************************
task path: /runner/project/tasks/expand_root.yaml:34
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"current_root_size_gb": "5.0", "filesystem_type": "xfs", "root_lv": "root", "root_vg": "RootVolGroup00"}, "changed": false}

TASK [Validate LVM configuration] **********************************************
task path: /runner/project/tasks/expand_root.yaml:42
skipping: [control-ho-a02q.sys.comcast.net] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Calculate disk_expansion_needed] *****************************************
task path: /runner/project/tasks/expand_root.yaml:47
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"disk_expansion_needed": "15.0"}, "changed": false}

TASK [Set expansion flag] ******************************************************
task path: /runner/project/tasks/expand_root.yaml:56
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"needs_expansion": true}, "changed": false}

TASK [Check if already at desired size] ****************************************
task path: /runner/project/tasks/expand_root.yaml:60
skipping: [control-ho-a02q.sys.comcast.net] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Debug expansion decision] ************************************************
task path: /runner/project/tasks/expand_root.yaml:65
ok: [control-ho-a02q.sys.comcast.net] => {
    "msg": "Expansion Decision:\\n- Current Root Size: 5.0GB\\n- Desired Root Size: 20GB\\n- Needs Expansion: True\\n- Additional Space Needed: 15.0GB\\n"
}

TASK [Skip if no expansion needed] *********************************************
task path: /runner/project/tasks/expand_root.yaml:75
skipping: [control-ho-a02q.sys.comcast.net] => {}

TASK [Calculate current_disk_size] *********************************************
task path: /runner/project/tasks/expand_root.yaml:85
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"current_disk_size": "40"}, "changed": false}

TASK [Calculate new disk size] *************************************************
task path: /runner/project/tasks/expand_root.yaml:89
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"new_disk_size": "55"}, "changed": false}

TASK [Calculate final disk size with safety cap] *******************************
task path: /runner/project/tasks/expand_root.yaml:93
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"final_disk_size": "55"}, "changed": false}

TASK [Debug disk expansion parameters] *****************************************
task path: /runner/project/tasks/expand_root.yaml:97
ok: [control-ho-a02q.sys.comcast.net] => {
    "msg": "Disk Expansion Parameters:\\n- Current Disk Size: 40GB\\n- Additional Space Needed: 15.0GB\\n- New Disk Size: 55GB\\n"
}

TASK [Gather disk info from virtual machine using name] ************************
task path: /runner/project/tasks/expand_root.yaml:105
ok: [control-ho-a02q.sys.comcast.net -> localhost] => {"changed": false, "guest_disk_info": {"0": {"backing_datastore": "HO1-Workload-vSAN-Datastore", "backing_disk_mode": "persistent", "backing_diskmode": "persistent", "backing_eagerlyscrub": false, "backing_filename": "[HO1-Workload-vSAN-Datastore] 642aec66-d420-e42e-0112-84160c74e3f0/control-ho-a02q_2.vmdk", "backing_sharing": "sharingNone", "backing_thinprovisioned": true, "backing_type": "FlatVer2", "backing_uuid": "6000C293-a7ec-bd36-3fa9-47e382e09b36", "backing_writethrough": false, "capacity_in_bytes": 42949672960, "capacity_in_kb": 41943040, "controller_bus_number": 0, "controller_key": 1000, "controller_type": "paravirtual", "iolimit_limit": -1, "iolimit_shares_level": "normal", "iolimit_shares_limit": 1000, "key": 2000, "label": "Hard disk 1", "shares_level": "normal", "shares_limit": 1000, "summary": "41,943,040 KB", "unit_number": 0}}}

TASK [Validate disk info] ******************************************************
task path: /runner/project/tasks/expand_root.yaml:116
ok: [control-ho-a02q.sys.comcast.net] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Perform disk expansion (VMware)] *****************************************
task path: /runner/project/tasks/expand_root.yaml:124
ASYNC OK on control-ho-a02q.sys.comcast.net: jid=j383567966169.196
changed: [control-ho-a02q.sys.comcast.net -> localhost] => {"ansible_job_id": "j383567966169.196", "attempts": 1, "changed": true, "disk_changes": {"0": "Disk reconfigured."}, "disk_data": {"0": {"backing_datastore": "HO1-Workload-vSAN-Datastore", "backing_disk_mode": "persistent", "backing_diskmode": "persistent", "backing_eagerlyscrub": false, "backing_filename": "[HO1-Workload-vSAN-Datastore] 642aec66-d420-e42e-0112-84160c74e3f0/control-ho-a02q_2.vmdk", "backing_sharing": "sharingNone", "backing_thinprovisioned": true, "backing_type": "FlatVer2", "backing_uuid": "6000C293-a7ec-bd36-3fa9-47e382e09b36", "backing_writethrough": false, "capacity_in_bytes": 59055800320, "capacity_in_kb": 57671680, "controller_bus_number": 0, "controller_key": 1000, "controller_type": "paravirtual", "iolimit_limit": -1, "iolimit_shares_level": "normal", "iolimit_shares_limit": 1000, "key": 2000, "label": "Hard disk 1", "shares_level": "normal", "shares_limit": 1000, "summary": "57,671,680 KB", "unit_number": 0}}, "finished": 1, "results_file": "/runner/.ansible_async/j383567966169.196", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [Debug disk expansion result] *********************************************
task path: /runner/project/tasks/expand_root.yaml:150
ok: [control-ho-a02q.sys.comcast.net] => {
    "disk_resize": {
        "ansible_job_id": "j383567966169.196",
        "attempts": 1,
        "changed": true,
        "disk_changes": {
            "0": "Disk reconfigured."
        },
        "disk_data": {
            "0": {
                "backing_datastore": "HO1-Workload-vSAN-Datastore",
                "backing_disk_mode": "persistent",
                "backing_diskmode": "persistent",
                "backing_eagerlyscrub": false,
                "backing_filename": "[HO1-Workload-vSAN-Datastore] 642aec66-d420-e42e-0112-84160c74e3f0/control-ho-a02q_2.vmdk",
                "backing_sharing": "sharingNone",
                "backing_thinprovisioned": true,
                "backing_type": "FlatVer2",
                "backing_uuid": "6000C293-a7ec-bd36-3fa9-47e382e09b36",
                "backing_writethrough": false,
                "capacity_in_bytes": 59055800320,
                "capacity_in_kb": 57671680,
                "controller_bus_number": 0,
                "controller_key": 1000,
                "controller_type": "paravirtual",
                "iolimit_limit": -1,
                "iolimit_shares_level": "normal",
                "iolimit_shares_limit": 1000,
                "key": 2000,
                "label": "Hard disk 1",
                "shares_level": "normal",
                "shares_limit": 1000,
                "summary": "57,671,680 KB",
                "unit_number": 0
            }
        },
        "failed": false,
        "finished": 1,
        "results_file": "/runner/.ansible_async/j383567966169.196",
        "started": 1,
        "stderr": "",
        "stderr_lines": [],
        "stdout": "",
        "stdout_lines": []
    }
}

TASK [Verify disk was expanded] ************************************************
task path: /runner/project/tasks/expand_root.yaml:155
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "cmd": ["lsblk", "-o", "SIZE", "-n", "-p", "/dev/sda"], "delta": "0:00:00.004976", "end": "2025-06-16 22:46:19.775843", "msg": "", "rc": 0, "start": "2025-06-16 22:46:19.770867", "stderr": "", "stderr_lines": [], "stdout": " 40G\\n  1G\\n 39G\\n  5G\\n  4G\\n  1G\\n  5G\\n 12G\\n 12G", "stdout_lines": [" 40G", "  1G", " 39G", "  5G", "  4G", "  1G", "  5G", " 12G", " 12G"]}

TASK [Install required packages] ***********************************************
task path: /runner/project/tasks/expand_root.yaml:165
changed: [control-ho-a02q.sys.comcast.net] => {"changed": true, "msg": "", "rc": 0, "results": ["Installed: cloud-utils-growpart-0.33-0.el8.noarch"]}

TASK [Rescan SCSI bus to detect new disk size] *********************************
task path: /runner/project/tasks/expand_root.yaml:170
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "cmd": ["echo", "1", ">", "/sys/class/block/sda/device/rescan"], "delta": "0:00:00.002668", "end": "2025-06-16 22:46:26.340874", "msg": "", "rc": 0, "start": "2025-06-16 22:46:26.338206", "stderr": "", "stderr_lines": [], "stdout": "1 > /sys/class/block/sda/device/rescan", "stdout_lines": ["1 > /sys/class/block/sda/device/rescan"]}

TASK [Wait for rescan to complete] *********************************************
task path: /runner/project/tasks/expand_root.yaml:175
Pausing for 2 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "delta": 2, "echo": true, "rc": 0, "start": "2025-06-16 22:46:26.643822", "stderr": "", "stdout": "Paused for 2.0 seconds", "stop": "2025-06-16 22:46:28.644000", "user_input": ""}

TASK [Read device information] *************************************************
task path: /runner/project/tasks/expand_root.yaml:179
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "disk": {"dev": "/dev/sda", "logical_block": 512, "model": "VMware Virtual disk", "physical_block": 512, "size": 42.9, "table": "msdos", "unit": "gb"}, "partitions": [{"begin": 0.0, "end": 1.07, "flags": ["boot"], "fstype": "ext4", "name": "", "num": 1, "size": 1.07, "unit": "gb"}, {"begin": 1.07, "end": 42.9, "flags": ["lvm"], "fstype": "", "name": "", "num": 2, "size": 41.9, "unit": "gb"}], "script": "unit 'GB' print"}

TASK [Capture disk information] ************************************************
task path: /runner/project/tasks/expand_root.yaml:185
ok: [control-ho-a02q.sys.comcast.net] => {"changed": false, "cmd": ["lsblk", "-o", "NAME,SIZE,TYPE,MOUNTPOINT", "-p"], "delta": "0:00:00.006648", "end": "2025-06-16 22:46:32.744396", "msg": "", "rc": 0, "start": "2025-06-16 22:46:32.737748", "stderr": "", "stderr_lines": [], "stdout": "NAME                                SIZE TYPE MOUNTPOINT\\n/dev/sda                             40G disk \\n├─/dev/sda1                           1G part /boot\\n└─/dev/sda2                          39G part \\n  ├─/dev/mapper/RootVolGroup00-root   5G lvm  /\\n  ├─/dev/mapper/RootVolGroup00-swap   4G lvm  \\n  ├─/dev/mapper/RootVolGroup00-tmp    1G lvm  /tmp\\n  ├─/dev/mapper/RootVolGroup00-home   5G lvm  /home\\n  ├─/dev/mapper/RootVolGroup00-var   12G lvm  /var\\n  └─/dev/mapper/RootVolGroup00-opt   12G lvm  /opt", "stdout_lines": ["NAME                                SIZE TYPE MOUNTPOINT", "/dev/sda                             40G disk ", "├─/dev/sda1                           1G part /boot", "└─/dev/sda2                          39G part ", "  ├─/dev/mapper/RootVolGroup00-root   5G lvm  /", "  ├─/dev/mapper/RootVolGroup00-swap   4G lvm  ", "  ├─/dev/mapper/RootVolGroup00-tmp    1G lvm  /tmp", "  ├─/dev/mapper/RootVolGroup00-home   5G lvm  /home", "  ├─/dev/mapper/RootVolGroup00-var   12G lvm  /var", "  └─/dev/mapper/RootVolGroup00-opt   12G lvm  /opt"]}

TASK [Calculate current_disk_size] *********************************************
task path: /runner/project/tasks/expand_root.yaml:190
ok: [control-ho-a02q.sys.comcast.net] => {"ansible_facts": {"new_added_disk_size": "40"}, "changed": false}

TASK [faile if added disk size not reflected] **********************************
task path: /runner/project/tasks/expand_root.yaml:194
fatal: [control-ho-a02q.sys.comcast.net]: FAILED! => {"changed": false, "msg": "Disk scan fail"}

PLAY RECAP *********************************************************************
control-ho-a02q.sys.comcast.net : ok=36   changed=4    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0   

Oh boy. You are in for a lot of pain. My first guess would be that you are using the wrong command to rescan the device:

echo 1 > /sys/class/block/sda/device/rescan

I think this does not do what you expect. You have to rescan the SCSI bus:

echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

where 0:0:0:0 is the actual SCSI bus address you have to determine.
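If you do not know the address up front, you can simply rescan every SCSI device (a sketch; run as root, assuming the standard sysfs layout):

```
# Rescan all SCSI devices so the kernel picks up the new capacity.
# Must run as root; harmless for devices whose size did not change.
for dev in /sys/class/scsi_device/*/device/rescan; do
    echo 1 > "$dev"
done
```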

I've also seen cases where a rescan just does not help and a reboot is required.

Aside from that, I would like to give you a bit of advice when disk resizing in a virtualized environment is in question:

Use your own VM templates without LVM. Just a single partition on the root disk, or if you want multiple partitions like boot, swap etc., make the root partition the last one. If you do this, you can leverage cloud-init to your benefit. After expanding the disk, you only need to restart the VM: cloud-init will expand your root fs at boot time, safe and simple.
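For reference, growing the root filesystem at boot is only a couple of lines of cloud-init configuration (a sketch; growpart and resize_rootfs are stock cloud-init directives, the file path is just an example):

```yaml
# e.g. /etc/cloud/cloud.cfg.d/99-growfs.cfg (example path)
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true
```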

As for the playbook itself, if you are going to just call a bunch of shell commands, why not make a shell script that does everything needed? OK, community.vmware.vmware_guest_disk has to be called from Ansible, but the rest is just a bunch of shell commands. You'll have more flexibility to manipulate the data, with real loops, conditionals etc., inside a shell script than doing the same with Ansible tasks.

Hi @bvitnik,

Thank you for your response. I've already tried the echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan command, but I didn't reboot afterward. I'll give that a shot now. I'm also planning to write a script to handle this automatically.

Regarding the VM template, I built my own using Packer instead of cloud-init. I'll revisit that as well. If you have any template creation code or script links, I'd really appreciate it if you could share them.

Thanks again!

One more piece of advice: try to do the expansion manually by following each of your commands one by one. That way you can see which command does not do its work. My guess is still the SCSI rescan.

Packer is fine, and cloud-init goes along with it just fine. Regardless of how you create your VM templates, be it manually, Packer, pure Ansible (like me) or something else, disk partitioning is still your choice... at least in the case when you are installing the OS from an ISO installer.

Unfortunately, I have nothing I can share publicly.


When I used the SCSI command with a reboot, it worked, thanks!

But why reboot? You don't have to call any command related to SCSI if you are rebooting.

Yes,
sorry, I am using a reboot along with it now. I tried the two commands below in the playbook and they were not working, but when I run them manually the new space is scanned, which confused me. Since this is part of my patching and needs a reboot anyway, I used that. I am still trying the other way.

echo 1 > /sys/class/block/sda/device/rescan (this also works when I run it manually)
echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

Aha, OK. This could be because you are using the Ansible command module. The command module does not support output redirection (">" in your case). Try changing command to shell.
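In your playbook that would look something like this (a sketch of just the rescan task):

```yaml
- name: Rescan SCSI bus to detect new disk size
  ansible.builtin.shell: echo 1 > /sys/class/block/sda/device/rescan
  changed_when: false
```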

Take a look at the notes at the bottom of the ansible.builtin.command module documentation:

In other words, your SCSI rescan commands were failing silently when run by Ansible.

I haven't done something like this for a while, but IIRC I only had to reboot when increasing the disk that Linux booted from, which seems to be the case here since /dev/sda is usually the boot disk. For other disks, I was able to get the new size without a reboot.

Don't take this as true without checking yourself. Maybe I remember this wrong, or this changed for more recent Linux versions. As I've said, I haven't done this for quite some time.

Oh, I did not notice that. Thank you!
Now it's working without a reboot after switching to shell.

@mariolenz in recent years, disk expansion, rescanning, repartitioning and filesystem expansion on VMware can be done live, even for boot (or root) disks. Those are basically day-to-day operations in the company I work for. There are some rare cases when something does not work live, but it's fine in most cases.
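For reference, the live sequence on the guest is essentially the same chain of commands your playbook runs, in order (a sketch for your layout: xfs root on LVM, VG RootVolGroup00, LV root; run as root after growing the disk in vSphere):

```
echo 1 > /sys/class/block/sda/device/rescan     # pick up the new disk size
growpart /dev/sda 2                             # grow partition 2 to fill the disk
pvresize /dev/sda2                              # grow the LVM physical volume
lvextend -l +100%FREE /dev/RootVolGroup00/root  # grow the root logical volume
xfs_growfs /                                    # grow the xfs filesystem online
```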


Hi @bvitnik,

Could you please guide me on an intermittent issue I'm facing?

We're using AWX 24.6.0 with the execution environment core 2.14.11. I've looked into this issue and found suggestions that it might require changes in ansible.cfg, specifically remote_tmp.

The problem is that I'm unable to create an ansible.cfg file in the root directory of the project, as it's based on someone else's repository.

Your help would be greatly appreciated.

error:

unreachable: true
msg: >-
  Failed to create temporary directory. In some cases, you may have been able to
  authenticate and did not have permissions on the target directory. Consider
  changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for
  more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "`
  echo /home/efv-ansible/.ansible/tmp `"&& mkdir "` echo
  /home/efv-ansible/.ansible/tmp/ansible-tmp-1750185297.6444545-66-40004816797886
  `" && echo ansible-tmp-1750185297.6444545-66-40004816797886="` echo
  /home/efv-ansible/.ansible/tmp/ansible-tmp-1750185297.6444545-66-40004816797886
  `" ), exited with result 1
changed: false

I have very limited experience with AWX, so you will have to open a separate thread for this issue, but this looks like an issue with the end host itself. Maybe you are using become with an unprivileged user?

Anyway, a custom ansible.cfg can also be baked into the EE if you are willing to create your own custom EE. An alternative is to pass an environment variable to the EE, but I'm having trouble finding the environment variable equivalent of remote_tmp... even worse, I can't find anything about remote_tmp in the Ansible documentation, only local_tmp :thinking:.
