Bug/crash in the setup module when detecting virtual devices (md0).

So I just pulled the latest release of Ansible, as it has a much-needed feature for me: gathering device information.

When I tried to run it against our inventory, it kept giving the stack trace below. I then compared it against my own working code: https://github.com/kavink/ansible/commit/d8036f0e28ec1b1a50af293b1f492de1866fd0da

I found that the upstream module was incorrectly detecting a virtual device as physical, so I checked in a fix and sent a pull request: https://github.com/ansible/ansible/pull/2052

The stack trace and tree structure are copied below.

Extra info:

[kk@u1 ansible]$ ansible q3 -m setup -k -u root --tree=/tmp/facts
SSH password:
q3 | FAILED => failed to parse: /sys/block/md0
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 1797, in ?
    main()
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 1050, in main
    data = run_setup(module)
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 1000, in run_setup
    facts = ansible_facts()
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 990, in ansible_facts
    facts.update(Hardware().populate())
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 312, in populate
    self.get_device_facts()
  File "/root/.ansible/tmp/ansible-1360629441.14-171498703486275/setup", line 439, in get_device_facts
    m = re.match(".*?(\[(.*)\])", scheduler)
  File "/usr/lib64/python2.4/sre.py", line 129, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
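The TypeError happens because md0 is a virtual device with no queue/scheduler entry in sysfs, so the scheduler value the module hands to re.match() is None rather than a string. A minimal sketch of a defensive lookup (the helper name and return convention are my own illustration, not the upstream code):

```python
import os
import re

def get_scheduler(device_path):
    """Return (active_scheduler, raw_line) for a block device, or
    (None, None) when the device has no queue/scheduler entry, as is
    the case for virtual devices such as md0 or ram0."""
    scheduler_file = os.path.join(device_path, "queue", "scheduler")
    if not os.path.exists(scheduler_file):
        # Virtual devices expose no I/O scheduler; bail out instead of
        # handing None to re.match() and crashing.
        return None, None
    raw = open(scheduler_file).read().strip()
    # The active scheduler is shown in brackets, e.g. "noop anticipatory [cfq]"
    m = re.match(r".*?(\[(.*)\])", raw)
    if m:
        return m.group(2), raw
    return None, raw
```

Called on /sys/block/md0 this returns (None, None) instead of raising the TypeError above.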

Tree structure:

[root@q3 tmp]# tree /sys/block/
/sys/block/
|-- md0
|   |-- dev
|   |-- holders
|   |-- md
|   |   |-- array_state
|   |   |-- chunk_size
|   |   |-- component_size
|   |   |-- layout
|   |   |-- level
|   |   |-- metadata_version
|   |   |-- new_dev
|   |   |-- raid_disks
|   |   |-- resync_start
|   |   `-- safe_mode_delay
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram0
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram1
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram10
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram11
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram12
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram13
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram14
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram15
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram2
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram3
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram4
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram5
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram6
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram7
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram8
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram9
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- sda
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- sda1
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda2
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda3
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda4
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda5
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda6
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda7
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda8
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda9
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- sdb
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- sdc
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- sdd
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
`-- sde
    |-- dev
    |-- device -> ../../devices/pci0000:00/0000:00:1f.2/host5/target5:0:0/5:0:0:0
    |-- holders
    |-- queue
    |   |-- iosched
    |   |   |-- back_seek_max
    |   |   |-- back_seek_penalty
    |   |   |-- fifo_expire_async
    |   |   |-- fifo_expire_sync
    |   |   |-- quantum
    |   |   |-- queued
    |   |   |-- slice_async
    |   |   |-- slice_async_rq
    |   |   |-- slice_idle
    |   |   `-- slice_sync
    |   |-- iostats
    |   |-- max_hw_sectors_kb
    |   |-- max_sectors_kb
    |   |-- nr_requests
    |   |-- read_ahead_kb
    |   `-- scheduler
    |-- range
    |-- removable
    |-- size
    |-- slaves
    |-- stat
    |-- subsystem -> ../../block
    `-- uevent

131 directories, 267 files
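One thing the tree output makes obvious: the physical disks (sda through sde) each have a device -> ../../devices/... symlink, while the virtual devices (md0, ram0-ram15) do not. That suggests a simple classification test along these lines (a sketch of the idea, not the exact code from the pull request):

```python
import os

def is_virtual_device(name, sysfs_root="/sys/block"):
    """Heuristic: physical block devices expose a 'device' symlink into
    the PCI/SCSI device tree; virtual ones (md*, ram*, dm-*) do not."""
    return not os.path.islink(os.path.join(sysfs_root, name, "device"))
```

With the trees above, is_virtual_device("md0") would be True and is_virtual_device("sda") would be False, so the module can skip scheduler/iosched facts for virtual devices.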

After applying this patch, here is the output:

https://gist.github.com/kavink/4759117

Thanks, will take a look...

We have the same issue on all of our Red Hat 5.x machines at the moment:

mmaas@xmgtansible:~/playbooks$ ansible -m setup aer2
sudo password:
aer2 | FAILED => failed to parse:
  File "/home/mmaas/.ansible/tmp/ansible-1360656501.85-61557686672417/setup", line 448
    d['sectors'] = d['sectors'] if d['sectors'] else 0
                                 ^
SyntaxError: invalid syntax

[mmaas@aer2 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.7 (Tikanga)
[mmaas@aer2 ~]$ tree /sys/block/
/sys/block/
|-- dm-0
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |   `-- sda2 -> ../../../block/sda/sda2
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- dm-1
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |   `-- sda2 -> ../../../block/sda/sda2
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- dm-2
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |   `-- sdb -> ../../../block/sdb
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- fd0
|   |-- dev
|   |-- device -> ../../devices/platform/floppy.0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- hda
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:07.1/ide0/0.0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- md0
|   |-- dev
|   |-- holders
|   |-- md
|   |   |-- array_state
|   |   |-- chunk_size
|   |   |-- component_size
|   |   |-- layout
|   |   |-- level
|   |   |-- metadata_version
|   |   |-- new_dev
|   |   |-- raid_disks
|   |   |-- resync_start
|   |   `-- safe_mode_delay
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram0
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram1
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram10
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram11
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram12
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram13
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram14
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram15
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram2
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram3
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram4
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram5
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram6
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram7
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram8
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- ram9
|   |-- dev
|   |-- holders
|   |-- range
|   |-- removable
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
|-- sda
|   |-- dev
|   |-- device -> ../../devices/pci0000:00/0000:00:10.0/host0/target0:0:0/0:0:0:0
|   |-- holders
|   |-- queue
|   |   |-- iosched
|   |   |   |-- back_seek_max
|   |   |   |-- back_seek_penalty
|   |   |   |-- fifo_expire_async
|   |   |   |-- fifo_expire_sync
|   |   |   |-- quantum
|   |   |   |-- queued
|   |   |   |-- slice_async
|   |   |   |-- slice_async_rq
|   |   |   |-- slice_idle
|   |   |   `-- slice_sync
|   |   |-- iostats
|   |   |-- max_hw_sectors_kb
|   |   |-- max_sectors_kb
|   |   |-- nr_requests
|   |   |-- read_ahead_kb
|   |   `-- scheduler
|   |-- range
|   |-- removable
|   |-- sda1
|   |   |-- dev
|   |   |-- holders
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- sda2
|   |   |-- dev
|   |   |-- holders
|   |   |   |-- dm-0 -> ../../../../block/dm-0
|   |   |   `-- dm-1 -> ../../../../block/dm-1
|   |   |-- size
|   |   |-- start
|   |   |-- stat
|   |   |-- subsystem -> ../../../block
|   |   `-- uevent
|   |-- size
|   |-- slaves
|   |-- stat
|   |-- subsystem -> ../../block
|   `-- uevent
`-- sdb
    |-- dev
    |-- device -> ../../devices/pci0000:00/0000:00:10.0/host0/target0:0:1/0:0:1:0
    |-- holders
    |   `-- dm-2 -> ../../../block/dm-2
    |-- queue
    |   |-- iosched
    |   |   |-- back_seek_max
    |   |   |-- back_seek_penalty
    |   |   |-- fifo_expire_async
    |   |   |-- fifo_expire_sync
    |   |   |-- quantum
    |   |   |-- queued
    |   |   |-- slice_async
    |   |   |-- slice_async_rq
    |   |   |-- slice_idle
    |   |   `-- slice_sync
    |   |-- iostats
    |   |-- max_hw_sectors_kb
    |   |-- max_sectors_kb
    |   |-- nr_requests
    |   |-- read_ahead_kb
    |   `-- scheduler
    |-- range
    |-- removable
    |-- size
    |-- slaves
    |-- stat
    |-- subsystem -> ../../block
    `-- uevent

121 directories, 228 files

Thanks,
Mark

Mark Maas wrote:

We have the same issue on all of our redhat 5.x machines at this moment:

mmaas@xmgtansible:~/playbooks$ ansible -m setup aer2
sudo password:
aer2 | FAILED => failed to parse:
  File
"/home/mmaas/.ansible/tmp/ansible-1360656501.85-61557686672417/setup",
line
448
    d['sectors'] = d['sectors'] if d['sectors'] else 0
                                 ^
SyntaxError: invalid syntax

This is fixed on devel as of 10 hours ago.

Daniel

Yep, it's Python 2.5+ conditional-expression syntax creeping into the module.

It's good this is being changed, because I don't like one-line ifs anyway :slight_smile:

Sorry, I missed that one; I thought I had removed all the if/else one-liners.

The reason you missed it is exactly why it's bad :slight_smile:
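For background: the offending line uses a conditional expression (x if c else y), which was added in Python 2.5, so it is a syntax error on the Python 2.4 that RHEL 5 ships. A 2.4-compatible rewrite of that line (a sketch, not necessarily the exact change on devel) is:

```python
# Python 2.4-safe equivalent of:
#   d['sectors'] = d['sectors'] if d['sectors'] else 0
d = {'sectors': ''}  # e.g. sysfs gave us an empty string
if not d['sectors']:
    d['sectors'] = 0
```

Since the whole module has to parse on every target's Python, even one such expression breaks fact gathering on older hosts.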