name 'module' is not defined with s3 module

When trying to run:

ansible --inventory /opt/wp/app/hegemon/ansible/hosts.prod --user mozart --private-key /root/.ssh/keys/prod p-backup01 -m s3

I get the error:

p-backup01 | FAILED >> { "failed": true, "msg": "Traceback (most recent call last):\n File \"<stdin>\", line 132, in <module>\nNameError: name 'module' is not defined\n", "parsed": false }

I’ve tested with other modules and they work just fine. I’m running Ansible 1.6.6.

Thanks.

I have the same error after upgrading to 1.7.1

Please make sure there is a bug filed on GitHub for this; include your Ansible version information and the line in your playbook that triggered this.

Running the s3 module with no arguments shouldn’t be a thing, but we also don’t want to present a traceback; we like to convert tracebacks into human-readable errors whenever possible.

You definitely do need to send it some arguments :slight_smile:

Thanks!

Just to follow up, here is my full command.

/opt/wp/virtualenv/hegemon/bin/ansible --inventory /opt/wp/app/hegemon/ansible/hosts.prod --user mozart --private-key /root/.ssh/keys/prod p-backup01 -m s3 -a "aws_access_key=****** aws_secret_key=****** bucket=wp_confluence_backup src=/backup/confluence mode=put"

So the bug is that the s3 module doesn’t throw a missing-arguments error the way the file module does:

-m file p-backup01 | FAILED >> { "failed": true, "msg": "missing required arguments: path" }

The s3 module, on the other hand, does not.

Unfortunately, even with the arguments I pasted above, I get the same error as I do without arguments.

p-backup01 | FAILED >> { "failed": true, "msg": "Traceback (most recent call last):\n File \"<stdin>\", line 132, in <module>\nNameError: name 'module' is not defined\n", "parsed": false }

Is this a problem on my end or should I include this in my bug report? (I’m assuming the s3 module is working for others :wink:)

I’ve opened a Github issue here: https://github.com/ansible/ansible/issues/8698

James Cammarata was kind enough to update the S3 module in the devel branch to handle errors better. Now when I run a command with the S3 module the error returned is:

p-backup01 | FAILED >> { "failed": true, "msg": "boto required for this module" }

It seems similar to this issue. I exported ANSIBLE_PYTHON_INTERPRETER as an environment variable and still received the same error.

export ANSIBLE_PYTHON_INTERPRETER=/opt/wp/virtualenv/hegemon/bin/python

I’m using a dynamic inventory so I’m unable to add it there, and I would prefer to keep it out of group_vars. Does anyone have any other suggestions?
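
In the meantime, this is the check I’ve been using to see which Python the modules run under on the target and whether it can import boto (host and key paths are from my earlier commands):

```
# Fails with an ImportError if boto is missing on the target; otherwise
# prints the path of the interpreter the command module ran under.
/opt/wp/virtualenv/hegemon/bin/ansible p-backup01 \
  --inventory /opt/wp/app/hegemon/ansible/hosts.prod \
  --user mozart --private-key /root/.ssh/keys/prod \
  -m command -a "python -c 'import sys, boto; print(sys.executable)'"
```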

It sounds like you have boto installed in a virtualenv and need to set ansible_python_interpreter to the Python that can find boto.

(Actually you just said this).

A “vars_files” include of “settings.yml” or a “vars” entry would be reasonable.
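
For the vars_files route, a rough sketch (file names here are illustrative, and the s3 arguments are the ones from your ad hoc command):

```
# A sketch, not a drop-in playbook: settings.yml would hold
# ansible_python_interpreter pointing at the Python that has boto.
cat > backup.yml <<'EOF'
- hosts: p-backup01
  vars_files:
    - settings.yml   # e.g. ansible_python_interpreter: /opt/wp/virtualenv/hegemon/bin/python
  tasks:
    - s3: bucket=wp_confluence_backup src=/backup/confluence mode=put
      # plus aws_access_key/aws_secret_key, as in the ad hoc command above
EOF
```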

If using something like Tower, you can also add variables to the inventory that gets synced from your cloud provider.

Unfortunately, this is an ad hoc command, so there are no vars, and I’m not using Tower. What is another way to insert this var into my command?

-e “ansible_python_interpreter=x” on the command line works, though you can actually still use group_vars/all

Just put this in a directory alongside your inventory script or playbook and it will load as expected.
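
For example, with the paths from this thread (directory layout assumed from your inventory path):

```
# group_vars/all beside the inventory is loaded even for ad hoc runs and
# dynamic inventories.
mkdir -p /opt/wp/app/hegemon/ansible/group_vars
cat > /opt/wp/app/hegemon/ansible/group_vars/all <<'EOF'
ansible_python_interpreter: /opt/wp/virtualenv/hegemon/bin/python
EOF
```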

Running an ad hoc command with ansible v1.7.1 returns

ansible: error: no such option: -e

ansible-playbook supports -e

Since this is an ad hoc command, there are also no playbooks.

I also tried updating my .ansible.cfg:

```
[defaults]
host_key_checking = false
legacy_playbook_variables = false
nocows = true
roles_path = /opt/wp/app/hegemon/ansible/roles
transport = ssh
ansible_python_interpreter = /opt/wp/virtualenv/hegemon/bin/python
```

Unfortunately that didn’t help either.

I was installing boto on the host I was running the command from, not the server I was running the command against. After installing boto on the remote server, things are working better now. Thank you.
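
In case it helps anyone else, here’s a quick sanity check that the remote Python can actually import boto (host and key paths are from my earlier commands):

```
# Prints the boto version using the target's default Python, i.e. the one
# Ansible modules run under when ansible_python_interpreter is unset.
/opt/wp/virtualenv/hegemon/bin/ansible p-backup01 \
  --inventory /opt/wp/app/hegemon/ansible/hosts.prod \
  --user mozart --private-key /root/.ssh/keys/prod \
  -m command -a "python -c 'import boto; print(boto.__version__)'"
```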