Hi All,
I need urgent help: I am getting the below error while creating an S3 bucket using a playbook.
I have set up the AWS keys as environment variables.
Let me know if I am missing anything.
Below is the playbook.
Not sure, but could it be an issue that your environment vars are lowercase?
I know that the awscli tools expect them in uppercase.
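For example (a minimal sketch; boto3 reads the uppercase names automatically, so you could either export the uppercase names and drop the lookups, or make the lookups match whatever names you exported — the key values here are placeholders):

    export AWS_ACCESS_KEY_ID=AKIAxxxxxxxx          # placeholder value
    export AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyy  # placeholder value

and then in the playbook:

    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"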
Any update? This is a little urgent for me.
---
- hosts: localhost
  tasks:
    - name: Create an empty bucket
      aws_s3:
        bucket: mybucket
        mode: create
        region: us-east-2
        permission: public-read
        aws_access_key: "{{ lookup('env','aws_key') }}"
        aws_secret_key: "{{ lookup('env','aws_secret') }}"
You look up environment vars here
NoCredentialsError: Unable to locate credentials
fatal: [localhost]: FAILED! => {
    "boto3_version": "1.9.212",
    "botocore_version": "1.12.212",
    "changed": false,
    "invocation": {
        "module_args": {
            "aws_access_key": "",
            "aws_secret_key": "",
But the lookup fails; they're NOT in your environment.
So, what have you actually done when you say "AWS keys I have set up as environment variables"?
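A quick way to check is a debug task next to the aws_s3 task (a sketch; it only reports whether the lookups return anything, without printing the secret):

    - name: Check env lookups
      debug:
        msg: "aws_key {{ 'is set' if lookup('env','aws_key') else 'is NOT set' }}, aws_secret {{ 'is set' if lookup('env','aws_secret') else 'is NOT set' }}"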
Thanks for the response.
I have used export commands to set up both AWS keys.
Let me know if I am missing anything here.
Regards
Amit
I had something like this happen to me recently when using ‘become’ in my playbook. It may be you’re having a similar problem with your use of sudo.
You're logged in as the ubuntu user (presumably where you have these env vars set, via a bash init script or via export or something), but your use of sudo is causing the playbook to be executed as the root user when Ansible runs the play.
Presumably there are no env vars configured for root, and thus the module's inability to find anything.
So I suggest you export your env vars in the root user's config. Alternatively, create .aws/config and .aws/credentials as the root user. Or try removing your use of sudo, if your org's security policy allows it.
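For example, the root credentials files could look like this (a sketch; the key values are placeholders):

    # /root/.aws/credentials
    [default]
    aws_access_key_id = AKIAxxxxxxxx
    aws_secret_access_key = yyyyyyyyyyyyyyyy

    # /root/.aws/config
    [default]
    region = us-east-2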
-tim
Thanks Tim.
I will try to run the export commands using sudo and let you know.
If you have any idea on Ansible Vault, please let me know; I tried to use it initially, but I was not able to use the vault file in my playbook.
Amit
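(On the Ansible Vault question, a minimal sketch of one way to do it; the file name aws_creds.yml and the variable names are placeholders:)

    ansible-vault create aws_creds.yml
    # put the credentials inside, e.g.:
    #   aws_key: AKIAxxxxxxxx
    #   aws_secret: yyyyyyyyyyyyyyyy

Then reference the vaulted file in the play and pass the vault password at run time:

    - hosts: localhost
      vars_files:
        - aws_creds.yml
      tasks:
        - name: Create an empty bucket
          aws_s3:
            aws_access_key: "{{ aws_key }}"
            aws_secret_key: "{{ aws_secret }}"
            bucket: mybucket
            mode: create

    ansible-playbook s3.yml --ask-vault-pass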
Hi Tim,
I tried to run the export command using sudo, but it says "export: command not found".
Hello All,
I was able to overcome the credentials issue; however, the playbook is now failing with the below issue.
root@ip-172-31-42-232:/etc/ansible# ansible-playbook s3.yml -vvv
ansible-playbook 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: s3.yml *******************************************************************************************************************************************************
1 plays in s3.yml
PLAY [localhost] *******************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
task path: /etc/ansible/s3.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244 `" && echo ansible-tmp-1566666456.61-207096775443244="` echo /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-2270DRSES3/tmpP8YUvk TO /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244/ /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1566666456.61-207096775443244/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [Create an empty bucket] ******************************************************************************************************************************************
task path: /etc/ansible/s3.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797 `" && echo ansible-tmp-1566666457.49-233501371669797="` echo /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/aws_s3.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-2270DRSES3/tmpFtWTLO TO /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797/AnsiballZ_aws_s3.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797/ /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797/AnsiballZ_aws_s3.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797/AnsiballZ_aws_s3.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1566666457.49-233501371669797/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_aws_s3_payload_SEbdSf/main.py", line 384, in bucket_check
    s3.head_bucket(Bucket=bucket)
  File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
fatal: [localhost]: FAILED! => {
    "boto3_version": "1.9.212",
    "botocore_version": "1.12.212",
    "changed": false,
    "error": {
        "code": "403",
        "message": "Forbidden"
    },
    "invocation": {
        "module_args": {
            "aws_access_key": "",
            "aws_secret_key": "",
            "bucket": "mybucket",
            "debug_botocore_endpoint_logs": false,
            "dest": null,
            "dualstack": false,
            "ec2_url": null,
            "encrypt": true,
            "encryption_kms_key_id": null,
            "encryption_mode": "AES256",
            "expiry": 600,
            "headers": null,
            "ignore_nonexistent_bucket": false,
            "marker": "",
            "max_keys": 1000,
            "metadata": null,
            "mode": "create",
            "object": null,
            "overwrite": "always",
            "permission": [
                "public-read"
            ],
            "prefix": "",
            "profile": null,
            "region": "us-east-2",
            "retries": 0,
            "rgw": false,
            "s3_url": null,
            "security_token": null,
            "src": null,
            "validate_certs": true,
            "version": null
        }
    },
    "msg": "Failed while looking up bucket (during bucket_check) mybucket.: An error occurred (403) when calling the HeadBucket operation: Forbidden",
    "response_metadata": {
        "host_id": "Y5EoHU94wSLzLN+iN7SDshJFmR78udMNnDpxUI13jVTTLVP5RQCS5oEYjmpB8o5JhejR8cuAB4w=",
        "http_headers": {
            "content-type": "application/xml",
            "date": "Sat, 24 Aug 2019 17:07:37 GMT",
            "server": "AmazonS3",
            "transfer-encoding": "chunked",
            "x-amz-bucket-region": "us-east-1",
            "x-amz-id-2": "Y5EoHU94wSLzLN+iN7SDshJFmR78udMNnDpxUI13jVTTLVP5RQCS5oEYjmpB8o5JhejR8cuAB4w=",
            "x-amz-request-id": "73D609B218DBD779"
        },
        "http_status_code": 403,
        "request_id": "73D609B218DBD779",
        "retry_attempts": 1
    }
}
PLAY RECAP *************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Below is the playbook I have written to create the bucket.
@amit, can you try with a different bucket name once? Remember, AWS S3 bucket names must be globally unique.
Hello All,
I tried changing the bucket name and ran my playbook, but it's failing with the same error.
I even specified the same region as my EC2 instance, but it's still failing.
Below are my boto versions
boto3 (1.9.212)
botocore (1.12.215)
Please suggest if I am doing something wrong here. I have also attached the S3 full-access policy to my IAM user.
I am running this playbook as the root user, and my IAM user name is ansible.
root@ip-172-31-42-232:/etc/ansible# ansible-playbook s3.yml -vvv
ansible-playbook 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: s3.yml *******************************************************************************************************************************************************
1 plays in s3.yml
PLAY [localhost] *******************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
task path: /etc/ansible/s3.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659 `" && echo ansible-tmp-1566690636.74-275478344374659="` echo /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-3920YJ0Dzu/tmppvx33n TO /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659/ /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1566690636.74-275478344374659/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [Create an empty bucket] ******************************************************************************************************************************************
task path: /etc/ansible/s3.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258 `" && echo ansible-tmp-1566690637.52-181677931604258="` echo /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/aws_s3.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-3920YJ0Dzu/tmpICazr3 TO /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258/AnsiballZ_aws_s3.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258/ /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258/AnsiballZ_aws_s3.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258/AnsiballZ_aws_s3.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1566690637.52-181677931604258/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_aws_s3_payload_AzOF0F/main.py", line 384, in bucket_check
    s3.head_bucket(Bucket=bucket)
  File "/root/.local/lib/python2.7/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/root/.local/lib/python2.7/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
fatal: [localhost]: FAILED! => {
    "boto3_version": "1.9.212",
    "botocore_version": "1.12.215",
    "changed": false,
    "error": {
        "code": "403",
        "message": "Forbidden"
    },
    "invocation": {
        "module_args": {
            "aws_access_key": "",
            "aws_secret_key": "",
            "bucket": "vinali",
            "debug_botocore_endpoint_logs": false,
            "dest": null,
            "dualstack": false,
            "ec2_url": null,
            "encrypt": true,
            "encryption_kms_key_id": null,
            "encryption_mode": "AES256",
            "expiry": 600,
            "headers": null,
            "ignore_nonexistent_bucket": false,
            "marker": "",
            "max_keys": 1000,
            "metadata": null,
            "mode": "create",
            "object": null,
            "overwrite": "always",
            "permission": [
                "public-read"
            ],
            "prefix": "",
            "profile": null,
            "region": "us-east-2",
            "retries": 0,
            "rgw": false,
            "s3_url": null,
            "security_token": null,
            "src": null,
            "validate_certs": true,
            "version": null
        }
    },
    "msg": "Failed while looking up bucket (during bucket_check) vinali.: An error occurred (403) when calling the HeadBucket operation: Forbidden",
    "response_metadata": {
        "host_id": "HynfxcD919dq4ThF71VTbvEHK5lTdSLqJtDqrLf1SCSaJAWzg7K4CRB5qzOHQH5bGsPSpkM28rM=",
        "http_headers": {
            "content-type": "application/xml",
            "date": "Sat, 24 Aug 2019 23:50:37 GMT",
            "server": "AmazonS3",
            "transfer-encoding": "chunked",
            "x-amz-id-2": "HynfxcD919dq4ThF71VTbvEHK5lTdSLqJtDqrLf1SCSaJAWzg7K4CRB5qzOHQH5bGsPSpkM28rM=",
            "x-amz-request-id": "103457AA674E483D"
        },
        "http_status_code": 403,
        "request_id": "103457AA674E483D",
        "retry_attempts": 0
    }
}
PLAY RECAP *************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Regards
Amit
I haven't had a close look, but I think the problem is that while you are logged in as ansible, your playbook then connects to localhost as root, and root does not have AWS credentials set up. From your output:
[…]
ESTABLISH LOCAL CONNECTION FOR USER: root
[…]
NoCredentialsError: Unable to locate credentials
[…]
In general, when running Ansible on a local host to change AWS resources (rather than running a play on a remote host), you don't need to become a different user. So try just running the playbook on localhost as user "ansible" (assuming that you have AWS credentials set up for "ansible", of course).
Alternatively, log in as root and set up AWS credentials in the root account on localhost. Not really a recommended approach.
For most of my playbooks that work with AWS infrastructure, I run them as a user with suitable AWS credentials, and the playbooks start like this:
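A minimal sketch of such a header (illustrative, not the exact original):

    ---
    - hosts: localhost
      connection: local
      gather_facts: false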
I was able to overcome the credentials issue,
but now I am getting a 403 error: the HeadBucket operation is forbidden.
Regards
Amit
More info needed. Check the credentials you are using and the permissions they provide.
Regards, K.
The IAM user has full permissions on S3.
Let me know what additional information is needed.
Regards
Amit
If you are getting a 403 error, then either you don’t have the permissions you think you do, or you are not accessing AWS as the user you think you are.
Carry out the desired operation using the command line while logged in (to localhost) as the user you think Ansible is using. If that works, then you 100% are using a different user in Ansible. If it doesn’t work, then you don’t have the permissions you think you do.
For example, while logged in locally as “ansible”:
aws s3 mb s3://this-is-amits-bucket
aws s3api head-bucket --bucket this-is-amits-bucket
If the bucket already exists, just use the second command.
Depending on how you have set up your AWS credentials, you may need to add "--profile whatever" to the commands, and possibly also "--region whatever".
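For example (assuming a profile named "ansible" and the region from your output):

    aws s3api head-bucket --bucket this-is-amits-bucket --profile ansible --region us-east-2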
One other possibility is that the credentials the “ansible” user is using are set up with MFA in AWS. If that’s the case, the above commands will prompt you for an MFA code.
Regards, K.
Thanks Karl.
I am having this confusion:
I am logging in as the root user; however, the IAM user is ansible, and I am using its credentials for the export.
How do I rectify this?
Do you suggest creating root as a user in IAM to avoid the confusion?
Regards
Amit
So I understand that these things are true:
1: You are logged into localhost as “root”
2: You are running Ansible as local user “root”
3: There is an IAM user called “ansible”
4: IAM user “ansible” has the necessary permissions
You have set up a suitable user in AWS ("ansible"). Now you have to make sure that the local user running Ansible (in your case "root") has access to the credentials locally, so that it can supply them to AWS as needed.
There is no need to create any new IAM users.
Typically you would (as the user running Ansible on your local host, so in your case as “root”) run “aws configure” and then ensure that the right credentials are in ~root/.aws/config and ~root/.aws/credentials.
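For example (the key values are placeholders):

    root# aws configure
    AWS Access Key ID [None]: AKIAxxxxxxxx
    AWS Secret Access Key [None]: yyyyyyyyyyyyyyyy
    Default region name [None]: us-east-2
    Default output format [None]: json

This writes the keys into ~root/.aws/credentials and the region into ~root/.aws/config.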
You do not HAVE to install the AWS CLI to use Ansible. One alternative is to set all the required environment variables in your shell before running Ansible. At a minimum you need these:
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
Many things expect AWS_REGION as well.
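For example, matching the region in your output:

    export AWS_REGION=us-east-2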
All this is extensively documented in the Ansible documentation.
There is little point trying to run Ansible until you have tested that the user you are running Ansible as (in your case root) can provide the required AWS access credentials. For this reason I suggest installing the AWS CLI and making sure that you can do simple things like create and list buckets.
Regards, K.
Thank you so much.
I will try it and give you feedback.
Have a good night.
Regards
Amit
Hi,
I have gone through the reply.
I am already using the export commands, but despite that I am facing the HeadBucket issue with error code 403.
I have also installed the AWS CLI; however, the shell is still not recognizing the aws command, and I am getting an "aws: command not found" error when I run any aws commands.
Let me know if you have any suggestions.
Regards
Amit