Hi,
I have a weird situation where a role doesn’t pick up configured variables; hopefully someone can help me out.
These are the relevant parts of my playbook:
- name: Deploy Elasticsearch VMs
  hosts: localhost
  tags:
    - deploy
  vars:
    os_type: Windows
    public_ip: yes
    use_max_datadisks: True
    create_network_security_group: nsg_eslogging
    create_availability_set: yes
    add_to_adhoc_group: elasticsearch
  roles:
    - { role: customer_deploy_azurevm, vm_name: customer-prod-es1 }
    - { role: customer_deploy_azurevm, vm_name: customer-prod-es2 }
- name: Deploy logstash VMs
  hosts: localhost
  tags:
    - deploy
  vars:
    os_type: Windows
    public_ip: yes
    max_data_disk_count: 2
    create_network_security_group: nsg_logstash
    create_availability_set: yes
    availability_set_name: as-customer-prod-logstash
    add_to_adhoc_group: logstash
  roles:
    - { role: customer_deploy_azurevm, vm_name: customer-prod-ls1 }
    - { role: customer_deploy_azurevm, vm_name: customer-prod-ls2 }
The weird thing is that the second play’s VMs don’t get the correct availability set (availability_set_name), but they DO get the correct NSG (create_network_security_group). The customer_deploy_azurevm role works so that if “create_availability_set” is true and “availability_set_name” is not set, an autogenerated availability set name is used. That variable seems to “linger”: the following two VMs get the previous auto-generated availability set name (I can see this if I dump all vars before doing anything else in the role).
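For context, the fallback inside the role looks roughly like this (a simplified sketch; the real task and naming scheme differ). The relevant detail, if the role indeed uses set_fact here, is that set_fact creates a host-level fact that persists for the rest of the playbook run, and facts take precedence over play vars:

```yaml
# roles/customer_deploy_azurevm/tasks/main.yml (simplified sketch)
# A fact set for localhost in play 1 survives into play 2 and would
# shadow the availability_set_name defined in play 2's vars.
- name: Autogenerate availability set name when none was given
  set_fact:
    availability_set_name: "as-{{ vm_name }}"
  when: create_availability_set | bool and availability_set_name is not defined
```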
This seems completely weird to me. Am I doing something wrong, or is this a bug?