How to use ec2.py with group_vars etc?

Hi

I can see the benefits of using the ec2.py dynamic inventory script,
but I run into some issues which I can't figure out how to fix.

1. The ec2.py requires the credentials to be available as environment
variables. But my deployment only has them available inside a vaulted
vars file (in host_vars/localhost, so they can be used by a previous
ansible role that creates infrastructure). How do people handle the
storage of credentials?
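One pattern I've been experimenting with (a sketch only — the vault file path and variable names are assumptions, not my real layout) is to keep the credentials vaulted and export them just for the tasks that need them, via the `environment:` keyword:

```yaml
# Sketch: file path and variable names are placeholders.
- hosts: localhost
  vars_files:
    - host_vars/localhost/vault.yml
  tasks:
    - name: Refresh the dynamic inventory cache with vaulted credentials
      command: ./ec2.py --refresh-cache
      environment:
        AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
        AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
```

But that only covers tasks that invoke ec2.py themselves; when ec2.py runs as the `-i` inventory source, it seems the usual answer is a profile in `~/.aws/credentials`, which boto picks up without environment variables.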

2. Up to now my inventory looks pretty simple:

[proxy]
proxy1 ansible_host=10.20.1.16
proxy2 ansible_host=10.20.1.17

[web]
web1 ansible_host=10.20.1.24
web2 ansible_host=10.20.1.25

[all:vars]
ansible_user=admin ansible_ssh_common_args='-o ProxyJump="admin@3.112.14.198"'

I've set things up so that the AWS instance names are the same as my
old inventory (i.e. proxy1, proxy2, web1, web2).
So, I can successfully ping instances by their name, for instance
using this syntax:

(ansible-2.7.12) dick.visser@nuc8 scripts$ ansible -i ec2.py 'tag_Name_proxy*' -m ping
10.20.1.16 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
10.20.1.17 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

But how do I set up the groups now?
Do I have to assign a "group" tag to the instance in AWS first with
value 'web', 'proxy', etc?
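For reference, adding such a tag after the fact would look something like this (a sketch — the region and instance ID are placeholders):

```yaml
# Sketch only: region and resource ID are placeholders.
- name: Add a group tag to an existing instance
  ec2_tag:
    region: ap-northeast-1
    resource: i-0123456789abcdef0
    state: present
    tags:
      group: web
```

ec2.py would then expose the instance under a group named tag_group_web.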

Ideally I'd like to keep the 'simple' group names like web, proxy, etc.

thx!!

Following up on my own question: I can add a tag to the
instances I create, let's say 'Type=proxy'.
Using the dynamic inventory I can target those instances by using
'tag_type_proxy'.
This then means I will have to use 'group_vars/tag_type_proxy' to put
any specific vars for this group of hosts.
However, this deployment is going to be used for both AWS and non-AWS
infrastructure.

I could just symlink:

group_vars/proxy => group_vars/tag_type_proxy

Is that something that people do?
Alternatively I could rewrite the ansible code to expect
'tag_type_proxy' instead of 'proxy', but that seems convoluted...
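Another option I'm considering instead of symlinks (a sketch, assuming ec2.py and a static file sit together in one inventory directory so their groups get merged) is a small static inventory that makes the tag groups children of the plain names:

```ini
; inventory/static.ini -- hypothetical filename; ec2.py sits alongside it
[tag_type_proxy]

[proxy:children]
tag_type_proxy

[tag_type_web]

[web:children]
tag_type_web
```

Then group_vars/proxy applies to everything ec2.py puts in tag_type_proxy, and the non-AWS hosts can be listed under [proxy] directly.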

thx

Dick

Dick -

Here’s how I do it. I defined the “instances” dictionary in the playbook under vars, but you can do it with extra_vars, or whatever works.

- name: Launch Instance
  ec2:
    instance_type: "{{ item.ec2_server_instance_type }}"
    group: "{{ item.ec2_server_security_group }}"
    image: "{{ item.ec2_server_image }}"
    wait: yes
    region: "{{ item.ec2_server_region }}"
    key_name: "{{ ec2_server_keypair }}"
    aws_access_key: "{{ ec2_server_access_key }}"
    aws_secret_key: "{{ ec2_server_secret_key }}"
    exact_count: "{{ item.exact_count | default(1) }}"
    vpc_subnet_id: "{{ vpc_subnet_ids['awsdev-app-1'] }}"
    count_tag:
      Name: "{{ item.ec2_server_name }}"
    instance_tags:
      Name: "{{ item.ec2_server_name }}"
      class: "{{ item.class | default(omit) }}"
      Type: "{{ item.type | default(omit) }}"
  with_items:
    - "{{ instances }}"
  register: ec2
  become: no

- name: gather facts about ec2 instances in case they were created a while ago
  ec2_instance_facts:
    aws_access_key: "{{ ec2_server_access_key }}"
    aws_secret_key: "{{ ec2_server_secret_key }}"
    region: "{{ item.ec2_server_region }}"
    filters:
#      "tag:Name": "{{ item.ec2_server_name }}"
      "tag:class": "{{ item.class }}"
      "instance-state-name": "running"
  with_items:
    - "{{ instances }}"
  register: ec2_facts   # distinct name, so the launch task's "ec2" register isn't overwritten
  no_log: true
  become: no
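If you then want the plain group names (web, proxy) available in the same run, the gathered facts can be turned into in-memory groups with add_host. This is only a sketch: it assumes the facts task registers as `ec2_facts` and that every instance carries Name and Type tags.

```yaml
# Sketch: assumes a register named "ec2_facts" and Name/Type tags
# on each instance; adjust to your own names.
- name: Build in-memory groups from the gathered facts
  add_host:
    name: "{{ item.1.tags.Name }}"
    ansible_host: "{{ item.1.private_ip_address }}"
    groups: "{{ item.1.tags.Type }}"
  with_subelements:
    - "{{ ec2_facts.results }}"
    - instances
  become: no
```

Groups created this way only live for the duration of the play, but that is usually enough to run the configuration roles right after provisioning.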

Hope that makes sense.

-Adam