How do I create a VM in a cloud and then in the same playbook configure it?

I am trying to create a playbook that will create a new Debian-based VM in Azure and then install some packages on that VM.

My plan for doing this is to target localhost (a Mac) so that I can do the infrastructure work (create the vnet, the network security groups, the public IP address, the NIC, and the VM) and then switch to operating against the new VM rather than localhost.

At the top of the playbook I have this:

- name: Create vm image for buildpool1
  hosts: localhost
  vars:
    var1: value

and more of the playbook looks like this:

  tasks:
    - name: bunch of setup tasks
      azure.azcollection.azure_rm_* (bunch of different ones)

    - name: Create pubip
      azure.azcollection.azure_rm_publicipaddress:
        name: "{{ name_pubip }}"
        location: "{{ name_loc }}"
        resource_group: "{{ name_rg }}"
        subscription_id: "{{ id_subscription }}"
        tenant: "{{ id_tenant }}"
        allocation_method: Dynamic
      register: output_ip_address

    - name: Create base vm
      azure.azcollection.azure_rm_virtualmachine:
        name: "{{ name_vm }}"
        <more stuff>
        vm_size:  Standard_B2als_v2
        image:
          offer: Debian-12
          publisher: Debian
          sku: 12
          version: latest
      register: azure_vm

    - name: Add new vm to inventory
      ansible.builtin.add_host:
        hostname: '{{ output_ip_address.state.ip_address }}'
        groups: buildpool

    - name: Print what's in hosts
      ansible.builtin.debug:
        var: ansible_play_hosts

    - name: Add Docker GPG apt Key
      when: ansible_facts['os_family'] == "Debian"
      #when: "'buildpool' in group_names"
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

When I run the playbook, I get output:

TASK [Print what's in hosts] *******************************************************************************************************
ok: [localhost] => {
    "ansible_play_hosts": [
        "localhost",
        "172.214.167.219"
    ]
}

TASK [Add Docker GPG apt Key] ******************************************************************************************************
skipping: [localhost] => {"changed": false, "false_condition": "ansible_facts['os_family'] == \"Debian\"", "skip_reason": "Conditional result was False"}

I had expected localhost to be skipped because of the when condition for Debian, but why doesn't Ansible even try the other host?

Thanks,

Mike

Because of this:

You are limiting play execution to localhost only… even when you dynamically add a real host to the inventory.

The way around this is to run the play against the real host, but since the real host does not yet exist, you have to do a little trick.

First you create an inventory.ini with your real host:

[buildpool]
myhost

Notice that there is no IP address specified (no ansible_host variable).

Then you run your play against myhost (or the buildpool group) and use delegate_to: localhost for the initial tasks that provision your instance/VM, like so:

- name: Create vm image for buildpool1
  hosts: myhost # <- not localhost any more
  vars:
    var1: value
...
  tasks:
    - name: bunch of setup tasks
      azure.azcollection.azure_rm_* (bunch of different ones)
      delegate_to: localhost # <- added

    - name: Create pubip
      azure.azcollection.azure_rm_publicipaddress:
        name: "{{ name_pubip }}"
        location: "{{ name_loc }}"
        resource_group: "{{ name_rg }}"
        subscription_id: "{{ id_subscription }}"
        tenant: "{{ id_tenant }}"
        allocation_method: Dynamic
      register: output_ip_address
      delegate_to: localhost # <- added

    - name: Create base vm
      azure.azcollection.azure_rm_virtualmachine:
        name: "{{ name_vm }}"
        <more stuff>
        vm_size:  Standard_B2als_v2
        image:
          offer: Debian-12
          publisher: Debian
          sku: 12
          version: latest
      register: azure_vm
      delegate_to: localhost # <- added

    - name: Update ansible_host inventory variable for myhost
      ansible.builtin.set_fact: # not add_host any more
        ansible_host: '{{ output_ip_address.state.ip_address }}'

    - name: Print what's in hosts
      ansible.builtin.debug:
        var: ansible_play_hosts

    - name: Add Docker GPG apt Key
      when: ansible_facts['os_family'] == "Debian"
      #when: "'buildpool' in group_names"
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

Note that we use set_fact to update the inventory with the IP of the host (i.e. the ansible_host variable).

add_host cannot modify the host list of the play it was invoked in, but the added host is available to subsequent plays. That being said, I’m not sure why ansible_play_hosts lists the real host for you while Ansible still ignores it during the play.
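To illustrate, here is a minimal two-play sketch (the IP address and names are made up) showing that a host added via add_host only participates in plays that start afterwards:

```yaml
# Play 1 runs only on localhost; add_host puts a new host into the
# in-memory inventory, but the host list of *this* play is already fixed.
- name: Provision (runs on localhost)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add the freshly created VM (hypothetical IP) to inventory
      ansible.builtin.add_host:
        hostname: 203.0.113.10
        groups: buildpool

# Play 2 starts after add_host ran, so its host pattern now matches
# the buildpool group and its tasks execute on 203.0.113.10.
- name: Configure (runs on the new VM)
  hosts: buildpool
  tasks:
    - name: Check connectivity to the new host
      ansible.builtin.ping:
```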

Thank you so much for the reply Bojan, your explanation really helps.

Apparently though, I’m more of a newbie at Ansible than I thought, as I’m unable to make this run at all.

To simplify things for a test (which hits the same failure as the full-blown playbook), I created test.yml playbook that looks like this:

- name: Create vm image for buildpool1
  hosts: buildpool_base
  tasks:
    - name: TimeStamp # noqa: no-changed-when
      delegate_to: localhost
      ansible.builtin.shell:
        echo "TimeStamp:`date +'%Y%m%d-%H%M%S'`"

and inventory.ini has this:

[buildpool]
buildpool_base

and then when I run, I get this:

% ansible-playbook test.yml -i inventory.ini
Using /Users/mpeck/.ansible.cfg as config file

PLAY [Create vm image for buildpool1] **********************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************
fatal: [buildpool_base]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_9.0p1, LibreSSL 3.3.6\r\ndebug1: Reading configuration data /Users/mpeck/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files\r\ndebug1: /etc/ssh/ssh_config line 54: Applying options for *\r\ndebug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/Users/mpeck/.ansible/cp/cbd69ea724\" does not exist\r\nssh: Could not resolve hostname buildpool_base: nodename nor servname provided, or not known", "unreachable": true}

PLAY RECAP *************************************************************************************************************************
buildpool_base             : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0

So…what am I doing wrong, now?

Ah, yes. One key component is missing. Since the real host does not exist at the beginning, Ansible cannot gather facts about it, so we have to disable fact gathering:

- name: Create vm image for buildpool1
  hosts: buildpool_base
  gather_facts: false # <- added

  tasks:
  ...

This is not the end of the story, though. Because we disabled fact gathering, we have to explicitly tell Ansible to gather facts once the instance/VM is up and running. We can do that in two different ways:

  1. Use the setup module:
- name: Create vm image for buildpool1
  hosts: buildpool_base
  gather_facts: false # <- added

  tasks:

  # here go all the tasks necessary to provision the instance/VM

  - name: Gathering Facts
    ansible.builtin.setup:

  # here go all the tasks after the instance/VM is up and running

  2. Split the playbook into two plays:
- name: Create vm image for buildpool1
  hosts: buildpool_base
  gather_facts: false # <- added

  tasks:

  # here go all the tasks necessary to provision the instance/VM

- name: Configure vm image for buildpool1
  hosts: buildpool_base
  gather_facts: true # <- gather_facts is now true; since true is the default, this line can also simply be removed

  tasks:

  # here go all the tasks after the instance/VM is up and running
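One more practical note: the VM resource can exist before sshd is actually reachable, so it is common to wait for the connection before gathering facts. A sketch using the built-in wait_for_connection module (the timeout values here are arbitrary); place it as the first task after provisioning, before setup runs or the second play's fact gathering:

```yaml
    - name: Wait for the new VM to become reachable over SSH
      ansible.builtin.wait_for_connection:
        delay: 10      # give the VM a moment to boot before the first probe
        timeout: 300   # give up after 5 minutes
```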

Again, thank you so much!

I was unable to get the setup module to work, and before your post I thought that “a second play” would have required a separate playbook file with a separate command-line invocation, which is the very thing I wanted to avoid.

However, because of your example, I was able to figure out how to do it in the same file, so single ansible-playbook invocation. Thank you!!!

I just had to insert this section after the VM is created and before doing any configuration on it, and everything worked, completely!

- name: Configure the buildpool1 base vm
  hosts: buildpool_base
  become: true
  vars:
    ansible_user: azureuser
  tasks:

Thanks again!

Mike

Glad to be of help :grin: