AWS users unite!!

I just started on AWS, coming from an existing Ansible-managed “smaller provider” environment.

The biggest issue I have yet to get a handle on is upgrading a running site. My current approach is going to be:

  1. keep a maintenance instance around that gets upgraded via Ansible; has an Elastic IP
  2. After upgrading the maintenance instance, save the AMI
  3. Bring down the running application instances
  4. upgrade the database if applicable
  5. Bring up the application instances on the newly-updated AMI

Any issues or peril in this approach? Is there an easier way to do things?
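
For concreteness, steps 3 and 5 would boil down to something like this with the ec2 module (the AMI ID, count, and variable names below are placeholders):

- name: Bring down the running application instances
  local_action:
    module: ec2
    state: absent
    region: "{{ region }}"
    instance_ids: "{{ old_instance_ids }}"   # IDs of the app instances being retired

- name: Bring up application instances on the newly updated AMI
  local_action:
    module: ec2
    image: ami-xxxxxxxx                      # the AMI saved from the maintenance instance
    instance_type: "{{ instance_type }}"
    keypair: "{{ keypair }}"
    group: "{{ security_group }}"
    region: "{{ region }}"
    wait: true
    count: 3                                 # however many app servers the site runs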

Regards,
-scott

Why not manage them like regular machines?

  1. It’s quicker with a lot of instances: do the update once, then just restart them all with the new AMI.

  2. I have to update the AMI regardless unless I want to persist the instance storage, which isn’t necessary since there’s nothing on the machine to keep. It’s all in the cloud DB.

  3. AWS instances get a new public IP address when they restart, unless you assign one manually in a VPC. I could do that, but I’d still need a maintenance instance with an external IP address in that VPC to be able to reach them.
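
For what it’s worth, the Elastic IP piece can also be driven from Ansible once the ec2_eip module is available (it arrived after 1.3 as far as I know, so treat this as a sketch; the instance ID variable is a placeholder):

- name: Associate an Elastic IP with the maintenance instance
  local_action:
    module: ec2_eip
    instance_id: "{{ maintenance_instance_id }}"   # placeholder for the maintenance instance
    region: "{{ region }}"
    in_vpc: yes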

Regards,
-scott

Hello Michael/Peter,

I work for AWS. While I am not an expert, I think I like where Peter is going here.

CloudFormation is a great way to provision the infrastructure stack. The application stack, as others suggested, depends on your preferred tool/language. You can automate both the infrastructure and the application stack from within CloudFormation (cloud-init using user-data).

As highlighted by John, there are several approaches to designing your AMIs - I call them the three “pizza models” :)

  1. Frozen/ready-made pizza: output an inventory of fully baked AMIs with every major build - easier to set up and fast (Netflix’s approach)
  2. Take-and-bake pizza: maintain “golden” or base AMIs, then fetch the build and dependencies and bundle AMIs manually when needed - a good intermediate approach
  3. Made-to-order pizza: AMIs with JeOS + an agent (Chef, Puppet, or Ansible-pull) + cloud-init, fetching everything at boot time - more control and easier to maintain, but slow

I was wondering whether there is anything we could use for #2, in which I could launch an instance, “execute” a playbook, create/bundle/register an AMI from a base AMI (or Amazon’s published AMI), and shut the instance down - all with one command - that we can run from our sandbox that runs Ansible. This would output the AMI ID. We can feed that AMI ID to the CloudFormation stack as a parameter and to Auto Scaling’s LaunchConfiguration, and modify it when needed.

I am not an Ansible expert either, so I apologize if this is already discussed and available today.
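
On the CloudFormation side, feeding the resulting AMI ID into a stack could look roughly like this with Ansible’s cloudformation module (the stack name, template path, and ImageId parameter name are just assumptions about the template):

- name: Update the stack with the freshly baked AMI
  hosts: localhost
  connection: local
  tasks:
    - name: Pass the new AMI ID to CloudFormation
      local_action:
        module: cloudformation
        stack_name: my-app-stack             # assumed stack name
        state: present
        region: "{{ region }}"
        template: files/my-app-stack.json    # assumed template path
        template_parameters:
          ImageId: "{{ new_ami_id }}"        # parameter the LaunchConfiguration would reference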

Jin

We have already merged in a nice module in 1.3 for saving AMIs you launch with Ansible.

You should check it out!

Here’s the module in question.

Newly added, upgrades welcome.

http://www.ansibleworks.com/docs/modules.html#ec2-ami

I was wondering whether there is anything we could use for #2, in which I could launch an instance, “execute” a playbook, create/bundle/register an AMI from a base AMI (or Amazon’s published AMI), and shut the instance down - all with one command - that we can run from our sandbox that runs Ansible. This would output the AMI ID. We can feed that AMI ID to the CloudFormation stack as a parameter and to Auto Scaling’s LaunchConfiguration, and modify it when needed.

It wouldn’t be difficult at all to do this with both the ec2 module and the ec2_ami module, I think; something like this will probably work:

- name: Launch instance(s)
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:

    - name: Launch instance
      local_action:
        module: ec2
        keypair: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        instance_tags: "{{ instance_tags }}"
      register: ec2

    - name: Add new instance to host group
      local_action: add_host hostname=${item.public_ip} groupname=launched
      with_items: ${ec2.instances}

- name: Configure instance(s)
  hosts: launched
  sudo: True
  gather_facts: True
  tasks:
    # ... configure the instance here ...

- name: Create the AMI and clean up
  hosts: localhost
  connection: local
  tasks:

    - name: Create the AMI
      local_action:
        module: ec2_ami
        instance_id: "{{ item }}"
        wait: yes
        name: test
      register: instance
      with_items: ec2_instance_ids   # the ID(s) of the configured instance(s)

    - name: Terminate instances that were previously launched
      local_action:
        module: ec2
        state: 'absent'
        instance_ids: ${ec2.instance_ids}
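
To run it end to end, the variables it expects could live in a vars file and be passed on the command line (the file names here are made up):

ansible-playbook build_ami.yml -e @ami_vars.yml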

Exactly, this is the kind of stuff I want to collect for the AWS chapter in the docs. It would be really cool to see that start being built out for these kinds of use cases.

I also know Lester had a nice example of building an AMI locally at AnsibleFest, and I’ve seen examples of it being done with the chroot plugin as well.

BTW, this can/should be updated:

- name: Add new instance to host group
  local_action: add_host hostname=${item.public_ip} groupname=launched
  with_items: ${ec2.instances}

Becomes:

- name: Add new instance to host group
  local_action: add_host hostname={{item.public_ip}} groupname=launched
  with_items: ec2.instances

Creating an AMI from a running instance is certainly possible, but it carries with it a number of potential issues, particularly if the AMI is shared or made public. The biggest one is files left on the disk from the AMI-creation process showing up when a new instance is launched.

All of the log files from the AMI creation will be on every instance that is launched from that AMI. If those log files contain anything sensitive, that information will be accessible on every new instance as well. This goes for more than just log files - bash history, for example.

It can also cause confusion when debugging a newly launched instance, as there are log files (possibly rotated) from some other machine that no longer exists.

Please have a look at this article from a while back about this and other concerns:
http://alestic.com/2011/06/ec2-ami-security
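
If you do bake from a running instance, a few cleanup tasks at the end of the configuration play help with some of this - a rough sketch, and the exact set of files to scrub will depend on the distro and what the playbook touched:

- name: Truncate logs accumulated while baking the AMI
  shell: find /var/log -type f -exec truncate -s 0 {} \;

- name: Remove root's shell history
  file: path=/root/.bash_history state=absent

- name: Remove SSH host keys so each new instance generates its own
  shell: rm -f /etc/ssh/ssh_host_*

- name: Remove authorized_keys baked in during the build
  file: path=/root/.ssh/authorized_keys state=absent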

Attaching an EBS volume to the AMI-creation instance, doing all of the configuration in a chroot’ed environment on that volume, and then creating the AMI from the EBS volume can get around many of these issues. Aminator and Eric’s alestic-git-build-ami do this. I am working on an Ansible provisioner for Aminator now, and that should be released soon.
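
For the Ansible side of a chroot-based build, the chroot connection plugin can be pointed at the mounted volume - a minimal sketch, assuming the EBS volume is already attached and mounted at /mnt/ami (the path and file names are placeholders):

# inventory file "chroot_hosts"
/mnt/ami ansible_connection=chroot

# chroot_build.yml
- name: Configure the image inside the chroot
  hosts: /mnt/ami
  gather_facts: False
  tasks:
    # ... install packages, drop config files, etc. ...

Run with: ansible-playbook -i chroot_hosts chroot_build.yml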

One possible use case for this might be creating a private AMI in a Jenkins job as a reference image to use for autoscaling. Then, when ansible-pull is run at bring-up, most of the configuration would already be done.
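
If the AMI already has Ansible and git baked in, the last mile can just be a user-data script that runs ansible-pull on boot - a rough sketch, with the repository URL and playbook name as placeholders:

#!/bin/bash
# user-data: pull and apply the latest configuration on first boot
ansible-pull -U https://github.com/example/site-config.git -d /srv/ansible local.yml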