Sample playbook demonstrating OpenStack usage?

I’m looking for some sample playbooks using most of the OpenStack modules. I’ve attempted building my own but found that certain things are unclear to me (like how I get the image id after its creation to use with a later nova_compute invocation, etc.)

The Quantum module seems to be the most straightforward, as it uses names rather than ids, making it easy to reference created resources. However, most other modules do not, so I am looking for some implementation patterns/best practices.
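
A pattern that has worked for me (not from the docs, so verify the return keys on your version) is to `register:` the result of the image task and feed the captured id into later tasks. A minimal sketch, assuming `glance_image` reports the new image's `id` in its result and that `flavor_id=1` exists in your cloud:

```yaml
- name: upload image and capture the module result
  glance_image: >
      auth_url={{ auth_url }}
      login_username={{ keystone_admin_username }}
      login_password={{ keystone_admin_password }}
      login_tenant_name={{ tenant }}
      name=cirros container_format=bare disk_format=qcow2
      state=present file=/tmp/images/cirros-0.3.2-x86_64-disk.img
  register: cirros_image

- name: inspect what the module actually returned
  debug: var=cirros_image

- name: boot a VM from the captured id
  nova_compute: >
      auth_url={{ auth_url }}
      login_username={{ keystone_admin_username }}
      login_password={{ keystone_admin_password }}
      login_tenant_name={{ tenant }}
      name=test-vm image_id={{ cirros_image.id }}
      flavor_id=1 state=present
```

The `debug` task is the important part: run it once to see which keys the module returns before wiring them into the next task.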

As a side note: I can’t make glance operate properly, even though the identical CLI invocation works as expected. I’m assuming I’m missing some piece:

glance_image: >
    login_username={{ keystone_admin_username }}
    login_password={{ keystone_admin_password }}
    login_tenant_name={{ tenant }}
    region_name={{ region }}
    auth_url={{ auth_url }}
    name=cirros
    container_format=bare
    disk_format=qcow2
    state=present
    file=/tmp/images/cirros-0.3.2-x86_64-disk.img

which results in error:

failed: [192.168.0.143] => {"failed": true}

msg: Error in fetching image list:

even though using .rc file from template:

export OS_USERNAME="{{ keystone_admin_username }}"
export OS_PASSWORD="{{ keystone_admin_password }}"
export OS_TENANT_NAME="{{ tenant }}"
export OS_AUTH_URL="http://{{ keystone_service_public_ip }}:5000/v2.0/"

export OS_REGION_NAME="{{ region }}"

works just fine…

"I’m looking for some sample playbooks using most of OpenStack modules. I’ve attempted building my own but found that certain things are unclear to me (like how do I get image id after it’s creation to be used with nova_compute invocation later, etc.) "

I’d hope these would be easy to browse with Horizon or the CLI.

“However, most other modules do not, so I am looking for some implementation patterns/best practices.”

It depends on what you want to do, I think. There are some references worth reading on GitHub.

As for glance, is that all you have in that error? It’s weird that it ends in a colon with nothing following.

"I’m looking for some sample playbooks using most of OpenStack modules. I’ve attempted building my own but found that certain things are unclear to me (like how do I get image id after it’s creation to be used with nova_compute invocation later, etc.) "

I’d hope these would be easy to browse with Horizon or the CLI.

yes, but it doesn’t go well with full automation :wink:

“However, most other modules do not, so I am looking for some implementation patterns/best practices.”

It depends on what you want to do, I think. There are some references worth reading on GitHub.

Say, something along these lines: define a network in Neutron, add a VM to Nova, attach some storage, populate the VM with packages, do the data transfer, and configure all the remaining pieces inside the VM.
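
Roughly how I'd lay that flow out with the 1.x modules; the module names are real, but the return key (`public_ip`) and flavor are assumptions to check against your setup, and there was no core volume-attach module at the time:

```yaml
# Play 1: cloud-level resources, driven from the control node
- hosts: localhost
  tasks:
    - name: define the network
      quantum_network: >
          auth_url={{ auth_url }} login_username={{ keystone_admin_username }}
          login_password={{ keystone_admin_password }} login_tenant_name={{ tenant }}
          name=appnet state=present

    - name: boot the VM
      nova_compute: >
          auth_url={{ auth_url }} login_username={{ keystone_admin_username }}
          login_password={{ keystone_admin_password }} login_tenant_name={{ tenant }}
          name=app01 image_id={{ image_id }} flavor_id=1 state=present
      register: vm

    - name: make the new VM visible to the next play
      add_host: name={{ vm.public_ip }} groups=new_vms

# Play 2: configuration inside the VM
- hosts: new_vms
  tasks:
    - name: populate the VM with packages
      yum: name=httpd state=present
```

Storage attachment would slot in between the plays as a `command:` task wrapping the cinder/nova CLI, until a proper module exists.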

As for glance, is that all you have in that error? It’s weird that it ends in a colon with nothing following.

yep, that’s all I get. I even configured it to log playbook execution with the same result:

2014-06-25 13:27:19,975 p=7165 u=dimon | failed: [192.168.0.143] => {"failed": true}
2014-06-25 13:27:19,976 p=7165 u=dimon | msg: Error in fetching image list:
2014-06-25 13:27:19,976 p=7165 u=dimon | FATAL: all hosts have already failed -- aborting

Say, something along these lines: define a network in Neutron,

FWIW, the last time I messed with this, Neutron wasn’t even close to being ready for anything like production use. Networking in OpenStack is pretty flaky in general, and Neutron is the future…but the future isn’t here yet.

If you already have that working and happy, then ignore that, of course. It’s just that it’s been a major source of pain for my team.

add a VM to Nova, attach some storage, populate the VM with packages, do the data transfer and configure all remaining pieces inside the VM.

When I did something exactly along these lines, I wound up using two sets of playbooks. One set did the OpenStack-level VM manipulations as one user, and the other did the actual data transfer/package installation as a different user. Then I wove the calls to the playbooks together with shell scripts.
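
The weaving can be as simple as a wrapper like this; the playbook names, inventory path, and user names are made up for illustration:

```shell
#!/bin/sh
# Run the cloud-level playbook as one user, then the in-VM playbook as another.
# Wrapped in a function so the two steps can be reused or called separately.
provision_then_configure() {
    # openstack-level VM manipulation, as the cloud admin user
    ansible-playbook -i inventory provision.yml -u cloudadmin || return 1
    # data transfer / package installation, as the application user
    ansible-playbook -i inventory configure.yml -u appdeploy
}

# provision_then_configure   # uncomment to run both steps in order
```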

It’s a n00b approach, but it worked.

As for glance, is that all you have in that error? It’s weird that it ends in a colon with nothing following.

yep, that’s all I get. I even configured it to log playbook execution with the same result:

2014-06-25 13:27:19,975 p=7165 u=dimon | failed: [192.168.0.143] => {"failed": true}
2014-06-25 13:27:19,976 p=7165 u=dimon | msg: Error in fetching image list:
2014-06-25 13:27:19,976 p=7165 u=dimon | FATAL: all hosts have already failed -- aborting

I had some weird issues with the core Ansible OpenStack modules when I was doing this. I added some error handling and logging, and those issues went away. This was back when 1.4 was cutting edge. I’m more inclined to blame OpenStack than Ansible, but it could have been a bad network cable for all I know. We wound up taking a different approach, so I didn’t get any further than that.
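
For anyone hitting the same thing, the error handling I mean was along these lines; `ignore_errors` and the `failed` result filter were both around in the 1.4-era releases, but treat this as a sketch to adapt rather than a recipe:

```yaml
- name: upload image, but keep going so the failure can be inspected
  glance_image: >
      auth_url={{ auth_url }} login_username={{ keystone_admin_username }}
      login_password={{ keystone_admin_password }} login_tenant_name={{ tenant }}
      name=cirros container_format=bare disk_format=qcow2
      state=present file=/tmp/images/cirros-0.3.2-x86_64-disk.img
  register: img_result
  ignore_errors: yes

- name: dump the full module result on failure
  debug: var=img_result
  when: img_result|failed

- name: then stop the play deliberately
  fail: msg="glance_image failed, see the dumped result above"
  when: img_result|failed
```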

I know it’s not much, but I hope it helps,
James

say, something along the lines: define network in neutron,

FWIW, the last time I messed with this, Neutron wasn't even close to being
ready for anything like production use. Networking in OpenStack is pretty
flaky in general, and Neutron is the future...but the future isn't here yet.

It exists in places, but people running OpenStack in production usually
have disproportionately large teams wrangling it. IMHO it's not cost
effective unless you're a big shop that has time to figure out what works
and what doesn't (and even then it may vary), but that's my personal,
non-official opinion. Rackspace does a very good job of making it seem
friendly by paying all the people to run it for you.

Even though it doesn't work for everyone, there's a reason way more users
here are using public cloud, I will say that :slight_smile:

I had some weird issues with the core Ansible OpenStack modules when I was
doing this. I added some error handling and logging, and those issues went
away. This was back when 1.4 was cutting edge. I'm more inclined to blame
OpenStack than Ansible, but it could have been a bad network cable for all
I know. We wound up taking a different approach, so I didn't get any
further than that.

If you see any such problems in 1.6.X, please do make sure we have tickets
filed. Otherwise we can't make sure fixes get upstream for the other users
who try them.

(http://github.com/ansible/ansible <- ticket URL)

Thanks!