Hi All,
I’m looking for a way to convert a VMware template to a virtual machine using Ansible.
The workflow is:
- Convert a template to a virtual machine
- Power on
- Log in to the virtual machine
- Update the OS patches (Linux)
- Power off
- Convert the virtual machine back to a template
As you can see, I want to keep the patch levels on my templates up to date and do all of the above in Ansible.
I’ve searched for ways to do this but haven’t come up with anything.
Any suggestions are most welcome.
Thanks,
Ted
I don’t think you can technically do that from the current list of available modules: http://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html
I posted an earlier question about how ESXi transfers IP address settings, but no one got back to me. Some of the network settings apply only to Windows-based VMs.
Hello,
You can’t do that with Ansible modules alone.
Here is the solution we found:
- Convert the template to a VM: use Ansible to execute a PowerCLI script (even easier now that PowerCLI runs on Linux).
- Power on the VM with the vsphere_guest module (yes, it’s an old module, but it works really well).
- Gather VM facts with the vsphere_guest module (you need this to get the IP of the VM; the assumption is that the VM has a network card with either a fixed IP or DHCP).
- Put the VM info into an in-memory inventory (especially the IP).
- Log in to the VM and update it with standard Ansible modules (you can handle either Linux or Windows VMs this way).
- Shut down the VM with a PowerCLI script using the Stop-VMGuest cmdlet (and in that script, loop until the VM state reports powered off). Using vsphere_guest to stop the VM is not recommended because it does not do a proper shutdown at the OS level.
- Convert the VM back to a template with a PowerCLI script.
It may be a little complex, but it works fine; a rough sketch of the glue is below.
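To make the flow more concrete, here is a minimal sketch of the Ansible glue under the assumptions above. The PowerCLI script paths, credential variables, and VM name are all placeholders, and the fact key used for the IP depends on your vsphere_guest version, so treat it as a starting point rather than a drop-in playbook.

```yaml
---
# Hedged sketch only: script paths, credential variables, and the VM name are
# placeholders, and the fact keys returned by vsphere_guest differ by version.
- name: Convert the template to a VM, power it on, and register it for patching
  hosts: localhost
  gather_facts: false
  vars:
    vm_name: my-template-vm            # placeholder
  tasks:
    - name: Convert the template to a VM via a PowerCLI script (placeholder path)
      command: pwsh -File files/convert_template_to_vm.ps1 -Name {{ vm_name }}

    - name: Power on the VM
      vsphere_guest:
        vcenter_hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        guest: "{{ vm_name }}"
        state: powered_on

    - name: Gather VM facts to find the guest IP
      vsphere_guest:
        vcenter_hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        guest: "{{ vm_name }}"
        vmware_guest_facts: yes
      register: vm_facts

    - name: Add the VM to an in-memory inventory group
      add_host:
        name: "{{ vm_name }}"
        groups: freshly_converted
        # Adjust this key to whatever your vsphere_guest version returns.
        ansible_host: "{{ vm_facts.ansible_facts.hw_eth0.ipaddresses | first }}"

- name: Patch the VM over SSH
  hosts: freshly_converted
  become: true
  tasks:
    - name: Update all packages (Linux example)
      yum:
        name: '*'
        state: latest

- name: Shut down and convert back to a template
  hosts: localhost
  gather_facts: false
  vars:
    vm_name: my-template-vm            # placeholder
  tasks:
    - name: Shut the guest down and convert it back to a template (placeholder script with the Stop-VMGuest loop)
      command: pwsh -File files/shutdown_and_convert.ps1 -Name {{ vm_name }}
```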
Maybe some of this can be done with vmware_guest and vmware_guest_facts, but I don’t like the current versions of those modules; I hit too many bugs with them.
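If you do want to try that route, the facts step looks something like the sketch below. Variable names are placeholders, and the keys inside the returned instance data vary between versions, so inspect the debug output before relying on a particular key.

```yaml
---
# Minimal sketch of the vmware_guest_facts alternative; placeholders throughout.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Gather facts about an existing VM
      vmware_guest_facts:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
      register: guest_facts

    - name: Inspect the returned data to find the guest IP
      debug:
        var: guest_facts.instance
```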
Best Regards,
Sebastien
Hi Sebastien,
Thanks very much for the comprehensive reply. Would you be able to share any of the scripts/playbooks you mentioned? I have no familiarity with PowerCLI and have yet to dabble with the VMware modules in Ansible.
Anything would be greatly appreciated just to get me started.
Thanks again,
Ted
I currently do all of this with playbooks and the vmware_guest module on Ansible 2.4.4 against vCenter, though I am not sure of the status of vmware_guest going forward (there are GitHub issues in 2.5 that discuss removing/deprecating it).
I have the following structure:
```
project
- inventory
  … other inventory dirs …
  - vms
    - group_vars
        all.yml
    - host_vars
        rhel75x64-base.yml
        rhel75x64-updated.yml
    hosts
- playbooks
  … other playbook dirs …
  - vcenter
      create_snapshots.yml
      revert_snapshots.yml
      update_template.yml
- roles
  … other role dirs …
  - vcenter
    - tasks
        create_snapshot.yml
        create_template.yml
        create_vm.yml
        delete_vm.yml
        refresh_vm.yml
        revert_snapshot.yml
        shutdown_vm.yml
        start_vm.yml
    - vars
        main.yml
```
We have the following set up on a schedule:
- playbooks/vcenter/update_template.yml runs weekly; it creates a VM in our vCenter, updates it from our yum repo, and turns it into a template (see the sketch after this list).
- playbooks/vcenter/create_snapshots.yml runs weekly (one hour after update_template); it destroys the existing vCenter inventory, creates a new set in vCenter, and creates snapshots so they are ready for CI builds.
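A hedged sketch of what such an update_template flow could look like with vmware_guest is below. All names and connection variables are placeholders, the real playbooks in the repo linked further down may differ, and whether is_template and shutdownguest behave this way depends on the vmware_guest version you run.

```yaml
---
# Illustrative only: names are placeholders and the actual repo may differ.
- name: Clone a working VM from the base template
  hosts: localhost
  gather_facts: false
  vars:
    work_vm: rhel75x64-updated          # placeholder name
  tasks:
    - name: Deploy the VM from the base template and power it on
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ vcenter_datacenter }}"
        folder: "/{{ vcenter_datacenter }}/vm"   # adjust to your folder layout
        name: "{{ work_vm }}"
        template: rhel75x64-base        # placeholder base template
        state: poweredon
        wait_for_ip_address: yes
      register: deploy

    - name: Register the new VM in an in-memory inventory
      add_host:
        name: "{{ work_vm }}"
        groups: template_build
        # Inspect deploy.instance if this key differs in your module version.
        ansible_host: "{{ deploy.instance.hw_eth0.ipaddresses | first }}"

- name: Patch the working VM
  hosts: template_build
  become: true
  tasks:
    - name: Apply all updates from the yum repo
      yum:
        name: '*'
        state: latest

- name: Shut the VM down and mark it as a template
  hosts: localhost
  gather_facts: false
  vars:
    work_vm: rhel75x64-updated          # placeholder name
  tasks:
    - name: Ask the guest OS to shut down
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ work_vm }}"
        state: shutdownguest

    # shutdownguest returns before the VM is fully off, so in practice you
    # would poll or pause here until the power state reports poweredOff.
    - name: Convert the powered-off VM into a template
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ work_vm }}"
        is_template: yes
```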
If you’d like more info, I can throw it into a GitHub repo for you to peruse, but it will take a day or so.
Hi Nick,
Thanks for the reply. If you could put it up on Github that would be brilliant.
Thanks again,
Ted
OK, here you go. It’s still in progress, but it’s up.
https://github.com/nickrnet/ansible-vmware
Cheers Nick,
Plenty there to get stuck into, appreciate it
Ted
Hello Folks,
With Ansible 2.5 we can control the number of CPU cores but not the number of virtual CPU sockets. Any ideas?
Thanks
Hello,
I will try to put it in a GitHub repo.
It looks to me like "num_cpus" is the number of sockets, and "num_cpu_cores_per_socket" sets the cores per socket.
Is that what you’re asking, or am I way off the mark? If you don’t specify "num_cpu_cores_per_socket", it defaults to 1, if I’m reading the pyvmomi source correctly: https://github.com/vmware/pyvmomi/blob/575ab56eb56f32f53c98f40b9b496c6219c161da/docs/vim/vm/VirtualHardware.rst
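For reference, here is a minimal sketch of how those two keys sit in the hardware dict of a vmware_guest task (the connection variables and VM name are placeholders):

```yaml
---
# Minimal sketch; connection variables and the VM name are placeholders.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Set the CPU layout on an existing VM
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        name: my-vm                       # placeholder VM name
        hardware:
          num_cpus: 4                     # total vCPUs requested
          num_cpu_cores_per_socket: 2     # added to the hardware dict in 2.5
```

If the module maps these straight onto the vSphere settings, num_cpus: 4 with num_cpu_cores_per_socket: 2 should end up as 2 sockets of 2 cores each, since vSphere derives the socket count from total CPUs divided by cores per socket.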