Delegate_to EC2 AWS not working as expected

Hi

I’m trying to install mdatp using this .yml, which is run via a .sh script that supplies the variables.

The problem is I can’t get delegate_to: all to work (so the task runs on all the instances under the account).

I basically query the instance IDs and the account number, and using these I have to run the mdatp edr tag set command.

I also tried creating an Ansible group with all the EC2 instances in the account, but didn’t get anywhere.

I should also mention that the item.tags.Name values from AWS are different from my Ansible host inventory names.

  - name: Retrieve and tag EC2 instances
    hosts: localhost
    gather_facts: no
    #become: yes
    vars_files:
    - vars.yml

    tasks:

    - name: "Get EC2 instance ID - {{ec2_instance}}"
      ec2_instance_info:
        profile: "{{aws_profile}}"
        region: "{{region}}"
        filters:
          "instance-state-name": [ "running" ]
      register: output

    - debug: var=output

    - set_fact:
        ec2_instance: "{{ output.instances | map(attribute='instance_id') | list }}"
    - debug: var=ec2_instance

    - name: Run mdatp edr tag set on each EC2 instance
      ansible.builtin.shell: |
        if command -v mdatp &>/dev/null; then
          mdatp edr tag set --name GROUP --value "MY-AWS-TAG - {{ item.instance_id }} - {{ aws_profile_number }}"
          echo "Tag set for {{ item.instance_id }} " >> /tmp/mdatp_success.txt
        else
          echo "mdatp is not installed on instance {{ item.instance_id }} " >> /tmp/mdatp_output_error.txt
        fi
      loop: "{{ output.instances }}"
      when:
      - item.state.name == 'running'
      - item.tags.Name is defined
      - "'master' not in item.tags.Name"
      - "'nodes' not in item.tags.Name"
      - "'prometheus' not in item.tags.Name"
      - "'octo' not in item.tags.Name"
      - "'eks' not in item.tags.Name"
      - "'tc' not in item.tags.Name"
      - "'om-' not in item.tags.Name"
      delegate_to: all

I don’t understand the logic here … why use delegate_to at all?

Set the playbook up to be able to run against a host (such as hosts: "{{ hosts_used }}", and feed it a hostname when you execute it), and the play will run against the remote server…

Am I missing something?
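For example, a minimal sketch of that approach (hosts_used is an assumed variable name, and the mdatp command is lifted from the question — adjust both to taste):

```yaml
# playbook.yml — run with: ansible-playbook playbook.yml -e hosts_used=my_ec2_group
- name: Set mdatp EDR tag
  hosts: "{{ hosts_used }}"   # group or hostname fed in via --extra-vars
  gather_facts: no
  tasks:
    - name: Run mdatp edr tag set on the target host
      ansible.builtin.shell: |
        if command -v mdatp &>/dev/null; then
          mdatp edr tag set --name GROUP --value "MY-AWS-TAG"
        fi
```

Because the play itself targets the remote hosts, there is no need for delegate_to or a loop — Ansible runs the task once per host in the group.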

Just use the aws_ec2 dynamic inventory to get and filter all your wanted instances and apply the one task against them… without a loop.

delegate_to must be an individual host, for example:

    - name: Run a command against each EC2 instance
      command: whoami
      delegate_to: "{{ item }}"
      loop: "{{ output.instances | map(attribute='public_dns_name') | list }}"
      vars:
        # refer to ansible.builtin.ssh documentation for more configuration options
        ansible_user: "{{ ec2_user }}"
        ansible_private_key_file: "{{ ec2_key_file }}"

Here’s an example inventory configuration file for the aws_ec2 dynamic inventory:

# aws_ec2.yml
plugin: amazon.aws.aws_ec2
aws_profile: default
regions:
  - us-east-1
filters:
  "instance-state-name": "running"
compose:
  # create additional variables based on the EC2 instance attributes using Jinja2 expressions
  # you need to adjust these for your environment
  ansible_host: public_dns_name
  ansible_user: '(tags.Name == "OpenBSD") | ternary("openbsd", "ec2-user", "ec2-user")'
  ansible_private_key_file: '"~/.ssh/aws_keys/ec2.pem"'  # use double quotes like this to set a simple string value

To debug the EC2 instances collected, run ansible-inventory -i aws_ec2.yml --graph. To debug a host variable, use --list instead of --graph.

In the play, set hosts: all or hosts: aws_ec2, and ansible-playbook -i aws_ec2.yml playbook.yml will run the tasks against the EC2 instances.
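Putting it together, here’s a sketch of the original mdatp task rewritten for the dynamic inventory — the tag filters and the mdatp command come from the question, and aws_profile_number would still need to be supplied (e.g. via -e). The aws_ec2 plugin exposes instance attributes such as instance_id and tags as host variables, so no loop or delegate_to is needed:

```yaml
# playbook.yml — run with: ansible-playbook -i aws_ec2.yml playbook.yml
- name: Set mdatp EDR tag on all running EC2 instances
  hosts: aws_ec2
  gather_facts: no
  tasks:
    - name: Run mdatp edr tag set
      ansible.builtin.shell: |
        if command -v mdatp &>/dev/null; then
          mdatp edr tag set --name GROUP --value "MY-AWS-TAG - {{ instance_id }} - {{ aws_profile_number }}"
        fi
      # instance_id and tags are host variables set by the aws_ec2 inventory plugin;
      # the when conditions replicate the tag filtering from the original loop
      when:
        - tags.Name is defined
        - "'master' not in tags.Name"
        - "'nodes' not in tags.Name"
        - "'eks' not in tags.Name"
```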