add a tag while creating an EBS via ec2_vol

Hi,

I would like to add some tags while creating new volumes using the ec2_vol module.

I saw in the GitHub issues that this feature has been implemented and merged, but I couldn’t find any documentation for it.

I would be grateful if anybody could point me to some documentation.

Thanks,
Chinmaya

I’m actually not seeing any tag support in the ec2_vol module.

Perhaps you could link me to the GitHub ticket you saw?

Tagging of volumes is supported through the ec2_tag module.
http://www.ansibleworks.com/docs/modules.html#ec2-tag

It would be nice to have both happen simultaneously, but it’s definitely feasible if you register the result of running ec2_vol.
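
Roughly (an untested sketch - the tag key and values here are just placeholders):

- name: Create an ebs volume
  # credentials can come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the
  # environment, or be passed as aws_access_key= / aws_secret_key=
  local_action: ec2_vol volume_size=100 zone=us-east-1a region=us-east-1
  register: ec2_volume

- name: Tag the volume
  local_action: ec2_tag resource={{ec2_volume.volume_id}} region=us-east-1 state=present
  args:
    tags:
      Name: data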

Will

I saw the tag module… but when I use it with ec2_vol, I am getting an error…
My sample playbook is
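
(The playbook presumably looked something like this, judging from the task names and errors further down the thread; the with_items line in particular is a guess.)

- name: Create an ebs volume
  local_action: ec2_vol volume_size=100 aws_access_key={{aa_key}} aws_secret_key={{as_key}} zone=us-east-1a region=us-east-1
  register: ec2_volumes

- name: Print out the volume ids
  local_action: debug msg={{ec2_volumes}}

- name: Tag the volume
  local_action: ec2_tag resource={{item.volume_id}} region=us-east-1 state=present aws_access_key={{aa_key}} aws_secret_key={{as_key}}
  with_items: ec2_volumes
  args:
    tags:
      name: test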

I haven’t used it for a while and don’t have my working copy any more…

Can you run it with -v so that we can see the results of ec2_vol (you may need to strip out your secret keys from the results - I recommend using environment variables for the secrets where you can). I suspect it’s something to do with ec2_volumes.results.

You may get some more information using
action: debug msg={{ec2_volumes}}
if you’re using v1.4

Will

When I run the playbook with -v and the debug option, I get the following output.

PLAY [Create a volume] ********************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [Create an ebs volume] **************************************************
ok: [localhost] => {"device": null, "volume_id": "vol-XXXXXX"}

TASK: [Print out the volume ids] **********************************************
ok: [localhost] => {"msg": "{u'device':"}

TASK: [Tag the volume] ********************************************************
fatal: [localhost] => One or more undefined variables: 'str object' has no attribute 'volume_id'

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
to retry, use: --limit @/root/cv.retry

localhost : ok=3 changed=0 unreachable=1 failed=0

Thanks,

Comparing how ec2 and ec2_vol process their results, I suspect this is a bug in ec2_vol - it’s just printing the JSON results rather than running module.exit_json with them. I was sure I had this working, although I went over to another approach (creating the volume at instance creation time and tagging it then), which I’m still hoping to get accepted at some point: https://github.com/ansible/ansible/pull/4534

If you raise this as a bug on GitHub I can look at fixing how this works - hopefully someone will accept it relatively quickly.

Will

Sorry about that, we’re still actively working through a pile of pull requests.

A separate fix for the print issue could be merged more quickly, but we’ll get there.

We dented a ginormous chunk of them this release, thanks to bringing the community team up by an extra developer!

Ok, this was a red herring, apologies. There is nothing actually wrong with using print json.dumps rather than module.exit_json (I tested both versions).

You need to use
debug msg="{{ec2_volumes}}"

(note the quotes around the variable)

Perhaps the problem is the use of with_items with a single result (obviously with_items would work with a one-element array, but may well not work with a single result).

I would try the following (note that I rename the register variable to reflect that it is a single result):

- name: Create an ebs volume
  local_action: ec2_vol volume_size=100 aws_access_key={{aa_key}} aws_secret_key={{as_key}} zone=us-east-1a region=us-east-1
  register: ec2_volume

- name: Tag the volume
  local_action: ec2_tag resource={{ec2_volume.volume_id}} region=us-east-1 state=present aws_access_key={{aa_key}} aws_secret_key={{as_key}}
  args:
    tags:
      name: test
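
On the with_items point: if the volumes were created in a loop (ec2_vol itself run under with_items), the registered variable would instead carry a results list, and the tagging task would loop over that - roughly (again untested):

- name: Create several ebs volumes
  local_action: ec2_vol volume_size={{item}} aws_access_key={{aa_key}} aws_secret_key={{as_key}} zone=us-east-1a region=us-east-1
  with_items:
    - 100
    - 200
  register: ec2_volumes

- name: Tag each volume
  local_action: ec2_tag resource={{item.volume_id}} region=us-east-1 state=present aws_access_key={{aa_key}} aws_secret_key={{as_key}}
  with_items: ec2_volumes.results
  args:
    tags:
      name: test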

When in doubt, using 1.4:

  - debug: var=what_variable_I_registered

will tell you the exact structure

Thanks… that solved the issue.