Connecting to existing EC2 instances

Hi,

I recently inherited an Ansible deployment that manages some EC2 instances, and I have a couple of questions. I can't find a .pem file or SSH key on the local file system, yet Ansible is somehow able to connect to the instances it creates and run shell scripts on them. How is that possible? I'm now trying to enhance the existing playbooks so that, every time we create a new instance, a script updates some data on a central/master server in our deployment, but I'm having trouble connecting to that server. How can I use whatever mechanism already works for the freshly provisioned instances to access this single (relatively static) instance?
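
Roughly, this is the shape of what I want to add after provisioning; central_server_ip, new_instance_ip, and the script path are all placeholders for our real values:

  - name: add our central server to the host list
    add_host: hostname={{ central_server_ip }} groupname=central

- hosts: central
  user: ec2-user
  sudo: True
  tasks:
    - name: push the new instance's details to the master server
      # register_instance.sh is a stand-in for our real update script
      shell: /opt/ourapp/register_instance.sh {{ new_instance_ip }}

The add_host task would go in the provisioning play; the second play then targets the central server, and it's that second connection I can't get to work.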

Thanks,

Nate

My existing instances are provisioned like this:

tasks:

  - name: Provision an instance
    ec2: >
      aws_access_key={{ ec2_access_key }}
      aws_secret_key={{ ec2_secret_key }}
      keypair={{ mykeypair }}
      group_id={{ security_group }}
      instance_type={{ instance_type }}
      image={{ image }}
      region={{ region }}
      wait=true
      count=1
      vpc_subnet_id={{ subnet_name }}
      instance_tags='{"Name":"{{ name }}","InternalName":"{{ internal_name }}"}'
    register: ec2_info

  - debug: var=ec2_info

  - debug: var=item
    with_items: ec2_info.instance_ids

  - debug: var=item
    with_items: ec2_info.instances

  - name: add host to host list
    add_host: hostname={{ item.public_ip }} groupname=ec2hosts
    with_items: ec2_info.instances

  - name: wait for instances to listen on port 22
    wait_for: state=started host={{ item.private_dns_name }} port=22
    with_items: ec2_info.instances

- hosts: ec2hosts
  gather_facts: True
  user: ec2-user
  sudo: True
  roles:
    - { role: common, XXXextra instance variablesXXX }

That last role (in its main.yml) calls a number of scripts and shell commands on the instance. How can it connect? And how can I replicate that connection?
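
For what it's worth, here's how I'd expect to replicate that connection by hand (203.0.113.10 stands in for one of the instance IPs), but without a key file I don't see how it can succeed:

  ssh -v ec2-user@203.0.113.10 'echo connected'

If that works without pointing -i at a .pem file, I'd guess an ssh-agent or a default key under ~/.ssh is doing the work.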

Run ansible with -vvvvv and you should see the full ssh command used; that should point at a key. If not, you likely have an agent, or the ssh user/password info is in inventory.
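
Concretely, that looks like the following (provision.yml is just a stand-in for your playbook name); ssh-add -l will also list any keys a running ssh-agent is holding:

  ansible-playbook provision.yml -vvvvv
  ssh-add -l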

Thanks Brian. That’s some helpful debugging advice.

I would grep -r your ansible directory for mykeypair. Since it is a variable specified in your playbook, either you need to define it on the command line with --extra-vars, or the name of the keypair is written as a variable in a file somewhere (a vars file, hosts file, etc.).

grep -r "mykeypair" /etc/ansible
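
For example, the variable could be supplied either way (my-aws-key is a made-up value):

  ansible-playbook provision.yml --extra-vars "mykeypair=my-aws-key"

or set in a vars file:

  mykeypair: my-aws-key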

Joanna

Joanna - thanks; I don't see any references to that variable in /etc/ansible, and the only places I see it in my playbook directory are references to the Amazon key pair's name; there are no key files.

I think the problem may be orthogonal to what I was thinking about yesterday, though. I tried running my existing Ansible script with the debug output, as suggested earlier; this also failed with an SSH key problem. On further investigation, I realized that Ansible Tower provides a credentials repository, so it might 'just work' through the Tower interface. I was hoping to debug my script through the terminal, but since everything else we do runs through Tower, I'll try testing there.

Thanks for the help!

Nate

Ah, that is a totally different question; Tower keeps its credentials in its own store. Also, for Tower questions go to support.ansible.com / support@ansible.com.