CitrusLeaf drive Over-provisioning

Hi,
I'm running a NoSQL database called CitrusLeaf. Our environment is
growing quickly, and drives fail regularly, so it's important to me to
manage our CitrusLeaf deployment with Ansible. Today I used my first
variable, which sets the internal IP address of the cluster in the
CitrusLeaf configuration file. This, for me, is powerful and simple. Awesome!

Moving forward, though, I have gotten one support person at CitrusLeaf
interested in Ansible. He has access to our servers and other
customers' servers. We both use cssh a lot, but I feel it's prone to
user mistakes. So he has offered (if needed) to write a module for
drive provisioning. Should we do this in a playbook, or do we need a
module?

Here are the steps:

Download a new version of hdparm and compile it with make.
Then run:
hdparm -I /dev/sdb
hdparm -I /dev/sdc
hdparm -I /dev/sdd
hdparm -I /dev/sde
hdparm -I /dev/sdf

This command returns a long list of information about a drive; the key
data is that it reports "not frozen". These drive letters change based
on how our datacenter has configured the servers.
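Since the drive letters move around, it may be safer to discover the drives with a glob than to hard-code sdb through sdf. Here is a minimal shell sketch; the `security_state` helper, the `--scan` flag, and the `/dev/sd[b-f]` glob are my own assumptions, not anything CitrusLeaf-specific:

```shell
#!/bin/sh
# Report the ATA security "frozen" state of each data drive.
# Assumes hdparm is on the PATH; adjust the glob for your hardware.

# Print "not frozen" or "frozen" given `hdparm -I` output on stdin.
security_state() {
    if grep -q 'not.frozen'; then
        echo "not frozen"
    else
        echo "frozen"
    fi
}

scan_drives() {
    for dev in /dev/sd[b-f]; do
        echo "$dev: $(hdparm -I "$dev" | security_state)"
    done
}

# Only touch the drives when explicitly asked to.
if [ "${1:-}" = "--scan" ]; then
    scan_drives
fi
```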

Then we need to run:

hdparm --user-master u --security-set-pass test /dev/sdb (on each drive)
and
hdparm --user-master u --security-erase test /dev/sdb (on each drive)
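These two commands are destructive, so a wrapper that dry-runs by default might be safer. This is only a sketch: the `secure_erase` name and the `RUN` flag are my own invention, and "test" is the throwaway password from the steps above:

```shell
#!/bin/sh
# Run the ATA secure-erase sequence on one drive. Prints the commands
# unless RUN=1 is set, since the real thing destroys all data.

secure_erase() {
    dev=$1
    for args in \
        "--user-master u --security-set-pass test" \
        "--user-master u --security-erase test"
    do
        if [ "${RUN:-0}" = "1" ]; then
            hdparm $args "$dev" || return 1
        else
            echo "hdparm $args $dev"
        fi
    done
}
```

A loop like `for d in /dev/sd[b-f]; do RUN=1 secure_erase "$d"; done` would then cover the "on each drive" part.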

Then, finally, wipe all the drives:
dd if=/dev/zero of=/dev/sdb bs=128k (on each drive). I think this
command would return a non-zero exit status in a bash script.
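The non-zero exit status is expected: with no count=, dd keeps writing until the device is full and then fails with "No space left on device", so a naive exit-status check would flag every successful wipe as a failure. Here is a sketch that treats that particular failure as success; the `wipe_drive` name is mine, and the message match assumes GNU dd with LC_ALL=C:

```shell
#!/bin/sh
# Zero a whole drive, reporting success even though dd exits non-zero
# when it runs off the end of the device.

wipe_drive() {
    dev=$1
    err=$(mktemp)
    if LC_ALL=C dd if=/dev/zero of="$dev" bs=128k 2>"$err"; then
        rm -f "$err"
        return 0
    fi
    # Filling the device is the expected outcome; anything else is real.
    grep -q 'No space left' "$err"
    status=$?
    rm -f "$err"
    return $status
}
```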

So I want to be able to do all this when provisioning new servers, and
per disk when one disk fails.

Finally it would be really cool to get others using Ansible through
this playbook or module.

So what do you think, is this a module or a playbook?

James

Not sure I completely understand the use case, but if you need to decide how to wipe the drives based on the output of hdparm (to see how many disks you have), a module is going to be the best way to do it now. Though I would be uncomfortable with a configuration management program deciding to erase drives whenever it detects a bad hdparm status.

I'd much prefer to see you write an API-using script that uses something like a "find_bad_drives" module and generates a report, and then a human decides to run the erasure
call, passing in the drives you need, very carefully, one host at a time. I did something very similar with func, as it had a "smart" module and also an "exploding laptop battery finder"
script.

0.7 should also have a feature where the result of the last command on each host is available in a variable.

–Michael