Understanding EC2 Keypairs

Hello,

Thanks for Ansible; so far I am enjoying it, but I have hit an SSH authentication issue that I am trying to understand.

Ansible version:

ansible 2.4.1.0

I am trying to create an EC2 instance and run some initial configuration commands on the new instance (using a dynamic inventory). Everything works well until I try to SSH to the new instance to run the initial configuration commands, at which point the SSH connection fails with a "Permission denied (publickey)" error.

I would like to know what is considered best practice for configuring a new EC2 instance, or whether there is an easy way to modify the existing playbook to do what I want.

The full playbook is:

```yaml
---
- name: Provision database servers
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    vpc_id: vpc-e0311a87
    subnet_id: subnet-eaa88aa3
    ami_id: ami-760aaa0f
    aws_region: eu-west-1
    profile: XXXX

  tasks:
    - name: Create database server keypair
      ec2_key:
        region: "{{ aws_region }}"
        profile: "{{ profile }}"
        name: db-servers
      register: keypair

    - name: Create security group for database servers
      ec2_group:
        profile: "{{ profile }}"
        name: "postgres-ssh"
        description: "Database security group"
        vpc_id: "{{ vpc_id }}"
        region: "{{ aws_region }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 5432
            to_port: 5432
            cidr_ip: 0.0.0.0/0
      register: aws_sg

    - name: Provision database servers
      ec2:
        profile: "{{ profile }}"
        key_name: db-servers
        instance_type: t2.micro
        image: "{{ ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        group_id: "{{ aws_sg.group_id }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: Database
        instance_tags:
          Name: Database
        assign_public_ip: yes
      register: ec2

    - name: Add new instances to host group
      add_host:
        name: "{{ item.public_ip }}"
        groups: postgres
      with_items: "{{ ec2.tagged_instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        delay: 15
        timeout: 320
        state: started
      with_items: "{{ ec2.tagged_instances }}"

    # - debug:
    #     var: keypair

- name: Configure database instances
  hosts: postgres
  user: ec2-user
  gather_facts: false
  roles:
    - postgres
```

The error occurs when I get to the Configure database instances play.

At this point I am assuming the public key has been configured on the new EC2 instance and that I need to specify the correct private key (generated via ec2_key) when I SSH to the server to run the configuration role.
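For what it's worth, when ec2_key creates a new keypair, its registered result should include the AWS-generated private key (it is only returned on that first creation). One possible fix, sketched below, is to save that key locally and point add_host at it; the ./db-servers.pem path is just an example:

```yaml
# Save the AWS-generated private key (only present when the key was just created)
- name: Save database server private key
  copy:
    content: "{{ keypair.key.private_key }}"
    dest: "./db-servers.pem"  # example path, adjust as needed
    mode: "0600"
  when: keypair.changed

# Tell the later play which private key to use for these hosts
- name: Add new instances to host group
  add_host:
    name: "{{ item.public_ip }}"
    groups: postgres
    ansible_ssh_private_key_file: "./db-servers.pem"
  with_items: "{{ ec2.tagged_instances }}"
```

With ansible_ssh_private_key_file set as a host variable, the Configure database instances play should pick up the right key automatically.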

Can somebody please advise how I should fix this issue?

Thanks for any help.

Just to let you know that I have managed to get this working by generating the keypair manually, loading the public key using the ec2_key module, and configuring the ansible_private_key_file variable.

I was hoping that the ec2_key module would generate a keypair, upload the public key so new EC2 instances receive it, and let me save the private key locally, but I am guessing the module is not designed to do all of this (it only appears to hand back a generated private key).
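For anyone hitting the same thing: the workaround above maps onto ec2_key's key_material parameter, which imports a locally generated public key instead of letting AWS generate the pair. A sketch, assuming a keypair already created with ssh-keygen at a hypothetical ~/.ssh/db-servers path:

```yaml
- name: Upload an existing public key to AWS as the db-servers keypair
  ec2_key:
    region: "{{ aws_region }}"
    profile: "{{ profile }}"
    name: db-servers
    key_material: "{{ lookup('file', '~/.ssh/db-servers.pub') }}"
```

The matching private key then goes into ansible_private_key_file (or --private-key on the command line), since it never leaves the local machine.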

If anyone could point me to some good documentation on how to manage keypairs with AWS/EC2 it would be really useful.

Thanks.