Ansible VMware deploy Linux VM - adapter not connected

Hello, I have an issue with Ansible. I am using AWX to execute a playbook with an execution environment pulled from Quay. The VM is created, but the network adapter is not connected; I have to go into VMware and tick the "Connected" checkbox manually.

After that, everything works. I tried adding connected: true and start_connected: true, but that does not seem to do anything.

I have checked that open-vm-tools is running in the template being cloned (it also reports as Running on the deployed VM).

- name: Clone the template
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    cluster: "{{ vcenter_vm_cluster }}"
    folder: "{{ vcenter_vm_folder }}"
    datacenter: "{{ vcenter_datacenter }}"
    datastore: "{{ vcenter_datastore }}"
    name: "{{ vm_name | upper }}"
    state: powered-on
    template: "{{ vm_template }}"
    wait_for_ip_address: false
    annotation: "{{ comments | default('Deployed by Ansible') }} Business Contact: {{ email_b_contact }}"
    networks:
      - name: "{{ network_name }}"
        ip: "{{ network_ip.msg }}"
        netmask: "{{ network_netmask }}"
        gateway: "{{ network_gateway }}"
        device_type: "{{ vm_device_type }}"
        type: static
    hardware:
      memory_mb: "{{ memory_mb | int * 1024 }}"  # multiplied by 1024, so the supplied value is effectively in GB
      num_cpus: "{{ num_cpus }}"
      hotadd_cpu: false
      hotadd_memory: false
    customization:
      domain: "{{ vm_domain }}"
      dns_servers: "{{ dns_servers.split(',') }}"
    guest_id: "{{ guest_id }}"
  register: deploy_vm

Could this be related to the execution environment?

Per https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_module.html#parameter-networks it looks like you’d need to add connected: true and/or start_connected: true within the list item for each NIC under networks, i.e.:

- name: Clone the template
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    cluster: "{{ vcenter_vm_cluster }}"
    folder: "{{ vcenter_vm_folder }}"
    datacenter: "{{ vcenter_datacenter }}"
    datastore: "{{ vcenter_datastore }}"
    name: "{{ vm_name | upper }}"
    state: powered-on
    template: "{{ vm_template }}"
    wait_for_ip_address: false
    annotation: "{{ comments | default('Deployed by Ansible') }} Business Contact: {{ email_b_contact }}"
    networks:
      - name: "{{ network_name }}"
        ip: "{{ network_ip.msg }}"
        netmask: "{{ network_netmask }}"
        gateway: "{{ network_gateway }}"
        device_type: "{{ vm_device_type }}"
        type: static
        connected: true
        start_connected: true
    hardware:
      memory_mb: "{{ memory_mb | int * 1024 }}"
      num_cpus: "{{ num_cpus }}"
      hotadd_cpu: false
      hotadd_memory: false
    customization:
      domain: "{{ vm_domain }}"
      dns_servers: "{{ dns_servers.split(',') }}"
    guest_id: "{{ guest_id }}"
  register: deploy_vm

Your original post is unclear to me: you mention you’ve tried adding connected: true but don’t specify where you added it, and neither option appears in your paste.

Hello,

I added connected: true and start_connected: true the same way you have done, but that did not solve my issue.

Let me test this again.

You may also want to take a look at this thread: vmware_guest - after cloning nic is not connected to network · Issue #45834 · ansible/ansible · GitHub. It seems similar to what you’re experiencing.

Similar, but no fix there. I worked around it by adding a Python script after deployment that connects the interface based on {{ network_name }}.

Not the cleanest solution, but it works for now.

We add a second task specifically to ensure the network is connected.

We use DHCP on our server VLANs, with permanent leases managed in our DNS service. Once the machine is created, we send its MAC address to the DNS service, which creates the DHCP record (a sketch of that hand-off follows the two tasks below).

The machine powers up and waits for a response from DHCP, which then configures its network address, gateway, netmask, etc.

##
## create a VM from template, powered off state, then add disks and tags
##
- name: create the guest vm using template
  community.vmware.vmware_guest:
    validate_certs: no
    hostname: "{{ vcenter[location|lower].vc }}"
    datacenter: "{{ vcenter[location|lower].dc }}"
    cluster: "{{ vcenter[location|lower].cl }}"
    name: "{{ vm_guest_name | lower }}"
    state: poweredoff
    template: "{{ os_type }}"
    folder: "{{ esx_folder }}"
    datastore: "{{ vcenter[location|lower].ds }}"
    advanced_settings:
      - key: "disk.EnableUUID"
        value: "true"
    hardware:
      hotadd_cpu: yes
      hotadd_memory: yes
      memory_mb: "{{ vm_spec[vm_size].ram }}"
      num_cpus:  "{{ vm_spec[vm_size].cpu }}"
    networks:
      - name: "VLAN_{{ vlan }}"
        type: dhcp
        start_connected: yes
        connected: yes
    wait_for_ip_address: no
  delegate_to: localhost
  register: newvm

##
## ensure the network connects on startup
##
- name: set the vm network to connect at startup
  community.vmware.vmware_guest_network:
    validate_certs: no
    hostname: "{{ vcenter[location|lower].vc }}"
    datacenter: "{{ vcenter[location|lower].dc }}"
    cluster: "{{ vcenter[location|lower].cl }}"
    name: "{{ vm_guest_name | lower }}"
    mac_address: "{{ newvm.instance.hw_eth0.macaddress }}"
    network_name: "VLAN_{{ vlan }}"
    start_connected: yes
    connected: yes
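
For the DHCP hand-off mentioned above, here is a minimal sketch of how the MAC address could be posted to the DNS service. The endpoint URL, token variable, and payload shape are all hypothetical stand-ins, not our actual API; only newvm.instance.hw_eth0.macaddress comes from the tasks above.

- name: register the vm mac address with the dns/dhcp service (hypothetical API)
  ansible.builtin.uri:
    url: "https://{{ dns_service_host }}/api/dhcp/records"   # illustrative endpoint
    method: POST
    headers:
      Authorization: "Bearer {{ dns_service_token }}"        # illustrative auth
    body_format: json
    body:
      hostname: "{{ vm_guest_name | lower }}"
      mac_address: "{{ newvm.instance.hw_eth0.macaddress }}"
    status_code: [200, 201]
  delegate_to: localhost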

Hi Walter, I will give this method a try and see what results I get.

@Walter_Rowe I tested your approach and it is indeed a cleaner solution. I only added an additional step to boot the VM once the network configuration is in place (see the sketch below).
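
For reference, a minimal sketch of that extra boot step, reusing the variable names from Walter's example above (community.vmware.vmware_guest can power on an existing VM by name):

- name: power on the vm once the network is set to connect
  community.vmware.vmware_guest:
    validate_certs: no
    hostname: "{{ vcenter[location|lower].vc }}"
    datacenter: "{{ vcenter[location|lower].dc }}"
    name: "{{ vm_guest_name | lower }}"
    state: poweredon
  delegate_to: localhost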

Nice work!


Glad that helped. I should have added that there are more tasks in our esx-create-vm.yml playbook that handle tagging and add storage. At the end we power on the machine.

We have a vars file that provides server specifications. You can see references to vm_spec[vm_size].ram and .cpu; we get vm_size from our ServiceNow request form and use it as a dictionary key. We model the keys after Amazon machine sizes so we can use the same key across VMware and AWS.

vm_spec:
  t2_micro:    { cpu: 1, ram: 1024  }
  t2_small:    { cpu: 1, ram: 2048  }
  t2_medium:   { cpu: 2, ram: 4096  }
  t2_large:    { cpu: 2, ram: 8192  }
  r5_large:    { cpu: 2, ram: 16384 }
  t2_xlarge:   { cpu: 4, ram: 16384 }
  m4_xlarge:   { cpu: 4, ram: 16384 }
  t2_2xlarge:  { cpu: 8, ram: 32768 }
  m4_2xlarge:  { cpu: 8, ram: 32768 }
  r5a_large:   { cpu: 2, ram: 16384 }
  r5a_xlarge:  { cpu: 4, ram: 32768 }
  r5a_2xlarge: { cpu: 8, ram: 65536 }
  r6i_large:   { cpu: 2, ram: 16384 }
  r6i_xlarge:  { cpu: 4, ram: 32768 }
  r6i_2xlarge: { cpu: 8, ram: 65536 }

We employ a similar method for server types (MSSQL server, nginx server, etc.) where we provide disks. For that we use a list of JSON records and json_query to filter for the server type.

fs_spec: [

  # linux apache web server servers
  { profile: apache         , name: system    , device: a, size:   75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /
  { profile: apache         , name: sites     , device: b, size:   32, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 0, owner: root,    group: root, perms: 0755 }, # /sites

  # linux commvault media servers
  { profile: commvault      , name: system    , device: a, size:   75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /
  { profile: commvault      , name: ddb       , device: b, size:  512, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 0, owner: root,    group: root, perms: 0755 }, # /ddb
  { profile: commvault      , name: index     , device: c, size:  512, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 1, owner: root,    group: root, perms: 0755 }, # /index

  # linux docker container servers
  { profile: docker         , name: system     , device: a, size:  75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /
  { profile: docker         , name: containers , device: b, size: 100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 0, owner: root,    group: root, perms: 0755 }, # /containers

  # linux general purpose servers
  { profile: general        , name: system    , device: a, size:   75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /

  # linux MySQL database servers
  { profile: mysql          , name: system    , device: a, size:   75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /
  { profile: mysql          , name: tmp       , device: b, size:   64, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 1, owner: root,    group: root, perms: 1777 }, # /tmp
  { profile: mysql          , name: apps      , device: c, size:  128, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 0, owner: oranist, group: dba,  perms: 0755 }, # /apps
  { profile: mysql          , name: archive   , device: d, size:  192, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 1, owner: oranist, group: dba,  perms: 0755 }, # /archive
  { profile: mysql          , name: mydata    , device: e, size:  208, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 2, unit: 0, owner: mysql,   group: dba,  perms: 0755 }, # /mydata
  { profile: mysql          , name: mybackups , device: f, size:  224, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 2, unit: 1, owner: mysql,   group: dba,  perms: 0755 }, # /mybackups

  # linux nginx web server servers
  { profile: nginx          , name: system    , device: a, size:   75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 0, unit: 0, owner: root,    group: root, perms: 0755 }, # /
  { profile: nginx          , name: sites     , device: b, size:   32, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: paravirtual, ctrl: 1, unit: 0, owner: root,    group: root, perms: 0755 } # /sites
]

fs_spec_windows: [

  # Windows commvault media servers
  { profile: commvault , disk_number: 0 , name: system , device: c, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 0, block: 4096 }, # OS/Sys
  { profile: commvault , disk_number: 1 , name: Index  , device: e, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 1, unit: 0, block: 4096 }, # Index
  { profile: commvault , disk_number: 2 , name: DDB    , device: f, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 2, unit: 0, block: 4096 }, # DDB

  # Windows general purpose servers
  { profile: general   , disk_number: 0 , name: system , device: c, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 0, block: 4096 }, # OS/Sys
  { profile: general   , disk_number: 1 , name: Data1  , device: e, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 1, unit: 0, block: 4096 }, # Data1

  # Windows IIS web servers
  { profile: iis       , disk_number: 0 , name: system , device: c, size:  100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 0, block: 4096 }, # OS/Sys
  { profile: iis       , disk_number: 1 , name: Data1  , device: e, size:  300, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 1, unit: 0, block: 4096 }, # Data1

  # Windows MS SQL servers
  { profile: mssql     , disk_number: 0 , name: system , device: c, size: 100, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 0, block: 4096  }, # OS/Sys
  { profile: mssql     , disk_number: 1 , name: Data1  , device: e, size: 500, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: 250, iops: 3000 },      azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 1, block: 65536 }, # Data1
  { profile: mssql     , disk_number: 2 , name: Data2  , device: f, size: 500, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: 250, iops: 3000 },      azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 0, unit: 2, block: 65536 }, # Data2
  { profile: mssql     , disk_number: 3 , name: Backup , device: g, size: 800, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: 250, iops: 3000 },      azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 1, unit: 0, block: 4096  }, # Backup
  { profile: mssql     , disk_number: 4 , name: Index  , device: i, size:   5, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 1, unit: 1, block: 65536 }, # Index
  { profile: mssql     , disk_number: 5 , name: Log    , device: l, size: 200, type: { esx: thin, aws: { ebs_vol_type: io2 , throughput: default, iops: 500 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 2, unit: 0, block: 65536 }, # Log
  { profile: mssql     , disk_number: 6 , name: SQL    , device: p, size:  75, type: { esx: thin, aws: { ebs_vol_type: gp3, throughput: default, iops: 3000 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 2, unit: 1, block: 4096  }, # SQL Program
  { profile: mssql     , disk_number: 7 , name: TempDB , device: t, size:  50, type: { esx: thin, aws: { ebs_vol_type: io2, throughput: default, iops: 500 },  azr: azr_default, gcp: gcp_default }, ctrl_type: lsilogicsas, ctrl: 3, unit: 0, block: 65536 }, # TempDB
]

In our ESX server creation playbook we query these lists for the specific server type. Note how the type_list value chooses the proper Windows or Linux list to query. We set my_family earlier in the playbook based on info we get from our ServiceNow form.

##
## build required disk lists based on fs_type (excludes system disk provided by ESX template)
##
- name: Initialize disk lists
  set_fact:
    type_list: "{{ (fs_spec if my_family != 'windows' else fs_spec_windows) | json_query('[?profile==`'+fs_type+'` && name!=`system`]') }}"
    disk_list: [ ]

- name: Build vmware_guest_disk list from filesystem list
  set_fact:
    disk_list: "{{ disk_list + this_list }}"
  loop: "{{ type_list }}"
  vars:
    this_list:
      - size_gb: "{{ item.size }}"
        type: "{{ item['type'][location[0:3]] }}"
        datastore: "{{ vcenter[location|lower].ds }}"
        scsi_type: "{{ item.ctrl_type }}"
        scsi_controller: "{{ item.ctrl }}"
        unit_number: "{{ item.unit }}"
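
To make the transformation concrete, here is what disk_list would look like for fs_type: nginx on a Linux machine in an esx location, using the fs_spec values above (the datastore value is illustrative; set_fact templating renders the numeric fields as strings):

disk_list:
  - size_gb: "32"
    type: thin
    datastore: some_datastore
    scsi_type: paravirtual
    scsi_controller: "1"
    unit_number: "0"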

##
## attach additional disks (if any) to template just provisioned
##
- name: Add disk(s) to {{ vm_guest_name | lower }}
  community.vmware.vmware_guest_disk:
    datacenter: "{{ vcenter[location|lower].dc }}"
    hostname: "{{ vcenter[location|lower].vc }}"
    name: "{{ vm_guest_name | lower }}"
    validate_certs: no
    disk: "{{ disk_list }}"
#    disk: "{{ snow_disks }}"
  when: (disk_list|length) > 0

As you can see, we filter the fs_spec or fs_spec_windows (fs = filesystem) lists by fs_type and build a VMware-formatted disk specification, so we can add the storage needed for that file system (server purpose) type.

We chose to be very detailed in our disk specifications to provide predictability about which volume is assigned to which disk. This was forward thinking that hopefully enables automation of requests to increase volume sizes as needed.

All of this happens before the machine is powered on for the first time, so that the OS does its initial probe on first boot and finds all the disks needed by that type of system. They all start from the same vanilla machine template for their OS. A later playbook in our workflow does all the “customization” to turn it into the type the user selected in the ServiceNow form. This includes formatting volumes and setting their mount points, installing the needed packages, applying system resource settings, installing configuration files, updating the host-based firewall, etc.
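
As one example of that later customization step, here is a minimal sketch of formatting a volume and setting its mount point. The device path, filesystem type, and mount point are illustrative; our actual playbook derives them from the fs_spec entries above.

- name: create a filesystem on a data volume (device path illustrative)
  community.general.filesystem:
    fstype: xfs
    dev: /dev/sdb

- name: mount the volume and persist it in /etc/fstab (mount point illustrative)
  ansible.posix.mount:
    path: /sites
    src: /dev/sdb
    fstype: xfs
    state: mounted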