I have created a playbook to spin up three VMs from a vSphere template and add configurations.
The VM creation portion looks like this:
```yaml
- name: Clone {{ webtier_vm_template }} Template as Web1 & Apply Network Configuration
  vmware_guest:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    name: "{{ web1_hostname }}"
    template: "{{ webtier_vm_template }}"
    datacenter: "{{ datacenter_name }}"
    folder: /AJLAB Infrastructure
    cluster: "{{ cluster_name }}"
    networks:
      - name: "{{ webtier_vmnet }}"
        ip: "{{ web1_ip }}"
        netmask: "{{ webtier_mask }}"
        gateway: "{{ webtier_gateway }}"
        type: static
        start_connected: True
    customization:
      domain: ajlab.local
      hostname: "{{ web1_hostname }}"
      dns_servers:
        - 172.16.92.100
    wait_for_ip_address: yes
    state: poweredon
```
I've been fighting and fighting to even get the network configuration to work properly. With CentOS 7 I first installed open-vm-tools, but the network changes never take place and the Ansible play hangs forever waiting for the network to come up.
I tried adding `Requires=dbus.service` and `After=dbus.service` to the `[Unit]` section of vmtoolsd.service and got the same behavior. I noticed that if I remove open-vm-tools and install VMware's own tools, the network changes work properly and everything is perfect.
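For reference, the unit edit I tried was equivalent to a drop-in like this (the file path assumes it was created with `systemctl edit vmtoolsd.service`):

```ini
# /etc/systemd/system/vmtoolsd.service.d/override.conf
# Make vmtoolsd start only after D-Bus is available
[Unit]
Requires=dbus.service
After=dbus.service
```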
Except I'm having problems with the Python 2 dependency in CentOS 7. The application I'm trying to run on the VMs is written in Python 3, and I can't figure out how to get around the need for SCL (I realized the `scl enable` command creates a subshell, which causes Ansible to wait/hang forever).
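One idea I haven't fully tested: sidestep the subshell entirely by pointing Ansible straight at the SCL interpreter binary instead of going through `scl enable`. The path below is a guess based on the rh-python36 collection layout:

```yaml
# Untested sketch: target the SCL Python binary directly so no
# subshell is ever spawned. Adjust the path for your collection.
- hosts: webtier
  vars:
    ansible_python_interpreter: /opt/rh/rh-python36/root/usr/bin/python3
```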
So I tried CentOS 8 for the VMs. I noticed CentOS 8 ships with open-vm-tools already installed, and I had the same problem getting the network changes to push. I uninstalled open-vm-tools and installed VMware's tools; now the network changes do work, but after the IP changes take place the VM is not reachable (ping or SSH) at the network layer. However, if I console into the VM and ping out from it, the network immediately comes up.
As I'm writing this, I'm thinking a tcpdump on the VM would be interesting to see, because it feels like ARPs either aren't reaching the VM or the VM is ignoring them; then, as soon as the VM pings out, the rest of the network learns its MAC and all is well.
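If that theory holds, I suppose I could force a gratuitous ARP from the playbook without needing SSH, since `vmware_vm_shell` runs commands through VMware Tools. This is only a sketch; the interface name (ens192), the arping path, and the `guest_root_password` variable are all assumptions:

```yaml
# Untested sketch: announce the VM's MAC via gratuitous ARP,
# executed through VMware Tools so no working SSH is required.
# ens192, the arping path, and guest_root_password are assumptions.
- name: Send gratuitous ARP from {{ web1_hostname }}
  vmware_vm_shell:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    datacenter: "{{ datacenter_name }}"
    vm_id: "{{ web1_hostname }}"
    vm_username: root
    vm_password: "{{ guest_root_password }}"
    vm_shell: /usr/sbin/arping
    vm_shell_args: "-U -c 3 -I ens192 {{ web1_ip }}"
```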
So the Ansible playbook actually continues, because the network is fully configured, but when the next task (installing dnf packages) kicks off, the playbook bombs out because the VM is unreachable over SSH.
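In the meantime, a stopgap might be to block until SSH is actually usable before the package task runs, e.g. with `wait_for_connection` (the timeout value below is an arbitrary choice):

```yaml
# Possible stopgap: wait until the host is genuinely reachable
# before the dnf task fires. 300s is an arbitrary upper bound.
- name: Wait for {{ web1_hostname }} to be reachable over SSH
  wait_for_connection:
    delay: 10
    timeout: 300
```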
I thought about embedding a script in the template that pings out for a while at first boot, but decided that was too hokey and shouldn't be necessary.
Any ideas or recommendations? Thanks!