Ansible: getting hosts from multiple groups in a role

Hi,

I’m writing some provisioning code to create and delete stacks in EC2. So far I have four pieces of code (a rough sketch of how they fit together follows the list):

  1. An Ansible role which creates infrastructure like a VPC and security groups
  2. An Ansible role which creates stack layers; the same code can be run multiple times with a layer name to make multiple layers of a stack
  3. An Ansible role which tears down stack layers; the same code can be run multiple times with a layer name to remove a particular layer
  4. An Ansible role which deletes infrastructure like the VPC and security groups
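
For context, this is roughly how I’m wiring them together. The role names other than delete_layer are stand-ins for whatever I actually called them:

    # create_stack.yml
    - hosts: localhost
      connection: local
      roles:
        - create_infrastructure
        - { role: create_layer, layer_name: web }
        - { role: create_layer, layer_name: app }

    # delete_stack.yml
    - hosts: localhost
      connection: local
      roles:
        - { role: delete_layer, layer_name: app }
        - { role: delete_layer, layer_name: web }
        - delete_infrastructure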

I also pass a variable called aws_environment to the stack creation, which I can use to manage multiple stacks at once (i.e. have staging, development and production stacks as different environments).
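
For reference, it gets passed on the command line, something like this (create_stack.yml is a stand-in for my actual playbook name):

    ansible-playbook create_stack.yml -e "aws_environment=staging"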

1 and 2 were more difficult than I anticipated, but I got them working fairly well. I’m totally stuck on 3, though, and need some help.

I’m at a step in the third role where I want to detach instances from the load balancer. I need to be able to generate the list of instances in the matrix of Environment × Layer, where Environment is passed as a variable to ansible-playbook and the layer name is a variable defined as layer_name for each call of the delete_layer role.

I then need to use that list to remove the instances from the ELB, like so:

- name: "{{ layer_name }} layer - Detach instance from ELB"
  local_action:
    module: ec2_elb
    state: absent
    region: "{{ aws_region }}"
    instance_id: "{{ item.id }}"    # <- HOW DO I GET THIS?
    ec2_elbs: "{{ layer_name }}"    # <- the ELB for the layer is named the same as the layer itself
  with_items: STUCK                 # <- this is where I'm stuck
  when: attach_elb == True

Can I perhaps make a fact that I can then iterate over somehow?
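
One idea I haven’t fully tested: if you’re using the ec2.py dynamic inventory, it already creates a group per tag (tag_Environment_staging, tag_Layer_web, and so on), so the list might be buildable by intersecting those groups with the intersect filter, and taking the instance id from the ec2_id hostvar the inventory script sets on each host. A sketch, assuming that inventory setup:

    - name: "{{ layer_name }} layer - Detach instance from ELB"
      local_action:
        module: ec2_elb
        state: absent
        region: "{{ aws_region }}"
        # ec2_id is a hostvar populated by the ec2.py dynamic inventory
        instance_id: "{{ hostvars[item]['ec2_id'] }}"
        ec2_elbs: "{{ layer_name }}"
      # intersect the per-tag groups created by ec2.py
      with_items: "{{ groups['tag_Environment_' + aws_environment] | intersect(groups['tag_Layer_' + layer_name]) }}"
      when: attach_elb == True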

My solution so far is to run the play against all nodes and then use a when statement to filter down to the resources I want to affect.

e.g.

- name: "{{ layer_name }} layer - Detach instance from ELB"
  local_action:
    module: ec2_elb
    state: absent
    # Having to specify credentials here is a bug:
    # https://github.com/ansible/ansible/issues/9984
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
    region: "{{ aws_region }}"
    instance_id: "{{ ansible_ec2_instance_id }}"
    ec2_elbs: "{{ layer_name }}"
  when:
    attach_elb == True and
    hostvars[inventory_hostname].ec2_tag_Environment == aws_environment and
    hostvars[inventory_hostname].ec2_tag_Layer == layer_name

Messy, but it works.

For anyone who stumbles across this in the future: I also found it useful to run the ec2_facts module against the nodes to see what facts are available (which is where ansible_ec2_instance_id comes from).
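
A quick way to dump one of those facts, in case it helps anyone (the task names are just for illustration):

    - name: Gather EC2 facts from the instance metadata service
      ec2_facts:

    - name: Show one of the facts it provides
      debug: var=ansible_ec2_instance_id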