I’m trying to use the ‘s3’ module to pull down large binary installers onto remote hosts, and I’m unsure how best to authenticate the remote host to AWS.
I was hoping I could just use my environment’s AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but they’re not passed to the remote; not surprising. Then I thought I could pass them to the remote by referencing them in the action, but Ansible doesn’t expand the environment variables there.
I can pass in the literal key and secret strings, and the remote can then access the S3 object, but I don’t want those sitting in the git repo for the playbook. For example:
- name: Download installer
  hosts: all
  user: ec2-user
  sudo: yes
  tasks:
    - name: TEST S3 copy on remote host
      action:
        module: s3
        mode: get
        bucket: mybucket-iaas-256
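One way to keep the literal strings out of the repo would be to expand them on the control machine at run time with the env lookup plugin and hand the results to the module as parameters. This is only a sketch under a couple of assumptions: that your s3 module version accepts aws_access_key/aws_secret_key (some versions spell these ec2_access_key/ec2_secret_key), and the object and dest values below are made-up placeholders, not anything from the original playbook:

- name: TEST S3 copy on remote host
  action:
    module: s3
    mode: get
    bucket: mybucket-iaas-256
    object: /installer.bin        # placeholder object key
    dest: /tmp/installer.bin      # placeholder path on the remote
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"      # expanded on the control machine
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"  # expanded on the control machine

The lookups are evaluated on the control machine before the task is shipped to the remote, so the actual key material never has to appear in the playbook file itself.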
The EC2 and S3 modules should pull from environment variables if these items are not explicitly set in the playbook. But the variables they look for are EC2_ACCESS_KEY and EC2_SECRET_KEY as of Ansible 1.2.3 from the Ubuntu PPA.
Ugh, sorry: since you are able to pull down files from S3 using the S3 module, I assume you are on 1.3, because 1.2.3 seems to support only “put” for ensuring a “present” state.
So, 1.3 also pulls from AWS_ACCESS_KEY and AWS_SECRET_KEY in addition to the previously mentioned variables (which have been kept for backwards compatibility). The point still stands that the S3 module pulls from your environment variables automatically if the keys are not set explicitly in the playbook, so that might save you a couple of lines in your playbook.
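To illustrate, with those variables exported in the environment the module actually runs in, a task like the one above could drop the explicit key parameters entirely (object and dest are again placeholders):

- name: TEST S3 copy on remote host
  action:
    module: s3
    mode: get
    bucket: mybucket-iaas-256
    object: /installer.bin       # placeholder
    dest: /tmp/installer.bin     # placeholder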
I’m running 1.3.3 from the Ubuntu PPA, and I see the same behavior as Chris Shenton: the AWS creds in my local environment variables aren’t pushed to the remotes.
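One workaround that seems to fit here, sketched under the assumption that the environment: task keyword and the env lookup plugin behave this way on 1.3.x: read the creds from the local environment on the control machine and forward them to the remote task under the names the module checks. The object and dest values are placeholders:

- name: TEST S3 copy on remote host
  action:
    module: s3
    mode: get
    bucket: mybucket-iaas-256
    object: /installer.bin       # placeholder
    dest: /tmp/installer.bin     # placeholder
  environment:
    EC2_ACCESS_KEY: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"      # read locally, set for the remote task
    EC2_SECRET_KEY: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"  # read locally, set for the remote task

The lookups run where the playbook is executed, and environment: sets those variables for the module process on the remote host, so nothing has to be hard-coded in the repo.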