Recommended way for remote host to pull from s3: authentication questions

I’m trying to use the ‘s3’ module to pull large binary installers down onto remote hosts, and I’m confused about how best to authenticate the remote host to AWS.

I was hoping I could just use my environment’s AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but they’re not passed to the remote; not surprising. Then I thought I could pass them to the remote by referencing them in the action, but it doesn’t expand the environment variables.

I can pass in the literal key and secret strings, and the remote can then access the S3 object, but I don’t want those in my git repo for the playbook. For example:

    - name: Download installer
      hosts: all
      user: ec2-user
      sudo: yes
      tasks:
        - name: TEST S3 copy on remote host
          action:
            module: s3
            mode: get
            bucket: mybucket-iaas-256
            object: /installers/oracle/11.2/oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
            dest: /tmp/ORA_TEST.bin

Failed:

    #aws_access_key: ${AWS_ACCESS_KEY_ID}
    #aws_secret_key: ${AWS_SECRET_ACCESS_KEY}

Works, but I don’t want the actual keys in git:

    aws_access_key: XYZZYPLOUGHPLOVER
    aws_secret_key: youwontgetitupthestairsyouareinamazeoftwistylittlepassages

Am I missing a common pattern on how to pass AWS creds to the remote hosts?

Thanks for any clues.

Your variables should interpolate fine.

I don’t think they are explicitly removed so I suspect they are not defined.

Also: please don’t use old style variables.
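For reference, a sketch of the same task using new-style Jinja2 `{{ }}` interpolation rather than the deprecated `$var` / `${var}` syntax, with the `env` lookup reading the keys from the control machine’s environment (assuming the `env` lookup plugin is available in your Ansible version; bucket and paths are taken from the earlier example):

```yaml
- name: TEST S3 copy on remote host
  action:
    module: s3
    mode: get
    bucket: mybucket-iaas-256
    object: /installers/oracle/11.2/oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
    dest: /tmp/ORA_TEST.bin
    # Lookups are evaluated on the control machine, so this reads the local
    # environment even though the module itself executes on the remote host.
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
```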

Got it, I was missing the $ENV variable to get at my environment, e.g.:

    - name: Oracle Instant Client Basic copy
      action:
        module: s3
        mode: get
        aws_access_key: $ENV(AWS_ACCESS_KEY_ID)
        aws_secret_key: $ENV(AWS_SECRET_ACCESS_KEY)

Thanks

The EC2 and S3 modules should pull from the environment variables if these items are not explicitly set in the playbook. But the variables it looks for are EC2_ACCESS_KEY and EC2_SECRET_KEY, as of Ansible 1.2.3 from the Ubuntu PPA.

Ugh, sorry: since you are able to pull down files from S3 using the S3 module, I assume you are using 1.3, because 1.2.3 only seems to support “put” for ensuring a “present” state.

So, 1.3 also pulls from AWS_ACCESS_KEY and AWS_SECRET_KEY in addition to the previously mentioned variables (those have been kept for backwards compatibility). The point stands: the S3 module pulls from your environment variables automatically if the keys are not set explicitly in the playbook. That might save you a couple of lines in your playbook.
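A sketch of that fallback (assuming the behavior described above; note that the environment consulted is that of the host where the module code actually runs, which matters for the remote-execution case discussed in this thread):

```yaml
# Assumes EC2_ACCESS_KEY / EC2_SECRET_KEY (or AWS_ACCESS_KEY / AWS_SECRET_KEY
# on 1.3) are exported in the environment where the s3 module executes.
- name: S3 copy relying on the environment-variable fallback
  action:
    module: s3
    mode: get
    bucket: mybucket-iaas-256
    object: /installers/oracle/11.2/oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
    dest: /tmp/ORA_TEST.bin
    # no aws_access_key / aws_secret_key: the module falls back to the env vars
```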

I’m running 1.3.3 from the Ubuntu PPA, and I find similar behavior to Chris Shenton: the AWS creds in local environment variables aren’t pushed to the remotes.

I need to explicitly capture the variables:

    aws_access_key=$ENV(AWS_ACCESS_KEY) aws_secret_key=$ENV(AWS_SECRET_KEY)

… and then it works fine.

They are not meant to be pushed to the remotes.

You use the ec2 modules with “local_action: ec2 …” as per the examples, and it will be fine.

Otherwise you could just pass the credentials as parameters to the module if you want to execute it on another host.
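A sketch of that pattern, under the assumption that you fetch the object onto the control machine with `local_action` (where your environment credentials are visible) and then push the file out with the `copy` module; the bucket, object, and paths are taken from the earlier example:

```yaml
- name: Download installer and push to remotes
  hosts: all
  user: ec2-user
  sudo: yes
  tasks:
    # Runs on the control machine, so the AWS keys in your local
    # environment are available to the s3 module.
    - name: Fetch installer onto the control machine
      local_action:
        module: s3
        mode: get
        bucket: mybucket-iaas-256
        object: /installers/oracle/11.2/oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
        dest: /tmp/ORA_TEST.bin

    # Runs on each remote host; no AWS credentials needed there.
    - name: Push installer to the remote host
      copy: src=/tmp/ORA_TEST.bin dest=/tmp/ORA_TEST.bin
```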