Hello, I’m using Ansible to launch and provision new EC2 instances, and we noticed some strange behavior while launching different instance types. I tried provisioning multiple m3.large instances while also attaching additional EBS volumes (gp2 type, 12 GB and 100 GB). The instances came up with all the volumes attached, but after a couple of seconds they were shut down and then immediately terminated. The state transition reason showed the following error: Server.InternalError: Internal error on launch.

At first I thought it might be some sort of EC2 service restriction, but I then created exactly the same instance through the AWS console and it worked. I also tried the same setup with a t2.micro and it worked perfectly fine; the instance came up with all the volumes and no issues. Finally, I provisioned an m3.large with no additional volumes, and that also worked as expected.
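In case anyone wants to check the failure on their side, the state transition reason can also be pulled directly with boto. A rough sketch (the region and instance id are placeholders; credentials are assumed to come from the environment or boto config):

# Sketch: fetch state and state reason for a terminated instance (boto 2.x).
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
instance = conn.get_only_instances(instance_ids=['i-xxxxxxxx'])[0]
print(instance.state)         # e.g. 'terminated'
print(instance.state_reason)  # dict with 'code' and 'message'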
Playbook I’m using:
- name: Provision {{ count }} instance(s) in {{ region }}
  hosts: localhost
  gather_facts: False
  vars_files:
    - vars/credentials.yml
    - "vars/{{ region }}.yml"
  tasks:
    - name: Create new ec2 key pair with ansible public key
      ec2_key:
        name: ansible
        key_material: "{{ item }}"
        region: "{{ region }}"
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
      with_file: /root/.ssh/id_rsa.pub

    - name: Launch instance
      ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        key_name: ansible
        group: "{{ ec2_group }}"
        instance_type: m3.large
        image: "{{ ec2_image }}"
        wait: true
        count: "{{ count }}"
        vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
        assign_public_ip: yes
        region: "{{ region }}"
        instance_tags:
          Name: ec2-{{ region }}-node
        volumes:
          - device_name: /dev/xvdb
            device_type: gp2
            volume_size: 12
            delete_on_termination: true
          - device_name: /dev/xvdf
            device_type: gp2
            volume_size: 100
            delete_on_termination: true
      register: ec2
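If it helps narrow things down, my understanding is that the launch task boils down to roughly this direct boto call. This is an untested sketch: the AMI and subnet ids are placeholders for my real values, and I’ve left out the security group and public-IP settings for brevity:

# Sketch: the equivalent direct boto 2.38.0 launch, to rule Ansible out.
import boto.ec2
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

conn = boto.ec2.connect_to_region('us-east-1',
                                  aws_access_key_id='...',
                                  aws_secret_access_key='...')

# Same two extra gp2 volumes the playbook requests.
bdm = BlockDeviceMapping()
bdm['/dev/xvdb'] = BlockDeviceType(size=12, volume_type='gp2',
                                   delete_on_termination=True)
bdm['/dev/xvdf'] = BlockDeviceType(size=100, volume_type='gp2',
                                   delete_on_termination=True)

reservation = conn.run_instances(image_id='ami-xxxxxxxx',
                                 key_name='ansible',
                                 instance_type='m3.large',
                                 subnet_id='subnet-xxxxxxxx',
                                 block_device_map=bdm)
print(reservation.instances[0].id)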
Information that might be helpful: Ansible version 1.9.1, boto version 2.38.0.

Any suggestions on how to solve this?
Thanks!