HELP: Problem with 'become' and pbrun

I'm relatively experienced with Ansible 1.3, but I'm now trying to bring in Ansible 2.0 for the first time on a new project (and hoping to displace Chef). I have around 1k servers to manage that use pbrun, but another team installed and controls pbrun. I have traditional sudo on a few of these hosts as well, but pbrun is the preferred privilege escalation method.

I use ssh-config-based authentication in all of the following examples.

HELP - I really need to figure this out, as Ansible will be mostly useless to me unless I can reliably use it with pbrun.

$ ansible all -i myhosts -o -m shell -a 'uptime' -b --become-method pbrun
c00413.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
c00414.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
c00415.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
c00416.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
c00417.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
c00418.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/bash: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}

$ ansible all -i myhosts -o -m shell -a 'uptime' -b --become-method '/opt/pb/bin/pbrun'
c00413.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
c00414.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
c00415.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
c00416.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
c00417.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
c00418.mydom.com | FAILED! => {"failed": true, "msg": "Privilege escalation method not found: /opt/pb/bin/pbrun"}
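The second error suggests that --become-method expects the name of a known escalation method (sudo, su, pbrun, ...), not a filesystem path. One way to point Ansible at a binary outside the default remote PATH, instead of passing a path on the command line, is the become_exe setting in ansible.cfg. A sketch, assuming your Ansible build honors it:

```ini
# ansible.cfg sketch (assumption: this Ansible 2.0 build honors become_exe)
[privilege_escalation]
become_method = pbrun
become_exe    = /opt/pb/bin/pbrun
```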

Here is my cfg file. I did make a few changes while trying to troubleshoot this:

[defaults]

# some basic default values...

hostfile = ./hosts
inventory = ./hosts
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 20
poll_interval = 10
sudo_user = root
transport = ssh
remote_port = 22
module_lang = C

gathering = implicit

# change this for alternative sudo implementations

#sudo_exe = sudo <<changed this
#module_name = shell <<changed this
#ask_sudo_pass= true <<changed this

executable = /bin/bash <<added this

The message changed when I made that change:

#FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: pbrun: command not found\r\n", "msg": "MODULE FAILURE", "parsed": false}

# SSH timeout

timeout = 3

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it

ssh_args = -o ControlMaster=auto -o ControlPersist=1800s
#1800 seconds is 30min

It seems your pbrun executable is not found.

What does this return (run it against just one host, foobar, to test it)?

$ ansible foobar -i myhosts -o -m shell -a 'which pbrun'

Johannes

Here is the frustrating thing, as seen below. Logging in directly as myself, pbrun is clearly found in the PATH, but Ansible fails all the same.

This is an interesting point: how is the PATH set for the session Ansible opens? It's different from what I get when I log in directly.

FROM ANSIBLE
$ ansible all -i myhosts2 -m shell -a 'which pbrun'
host1.mydom.com | FAILED | rc=1 >>
which: no pbrun in (/usr/lib64/qt-3.3/bin:/usr/local/maven-3.2.1/bin:/usr/local/bin:/bin:/usr/bin)

FROM LOCAL SHELL ON SAME HOST
$ ssh host1.mydom.com
Last login: Sun Apr 3 19:00:41 2016 from 16.87.0.70
$ which pbrun
/opt/pb/bin/pbrun
$

$ echo $PATH
/opt/krb5/sbin/64:/usr/lib64/qt-3.3/bin:/usr/local/maven-3.2.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/X11R6/bin:/sbin:/usr/sbin:/usr/bin:/opt/pb/bin:/opt/perf/bin:/bin:/usr/local/bin:/home/corcharp/bin

So this really seems to be a matter of PATH. I'm confused about why I'm not getting the correct path.

If I log into a host:
$ echo $PATH

/opt/krb5/sbin/64:/usr/lib64/qt-3.3/bin:/usr/local/maven-3.2.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/X11R6/bin:/sbin:/usr/sbin:/usr/bin:/opt/pb/bin:/opt/perf/bin:/bin:/usr/local/bin:/home/corcharp/bin

But from ANSIBLE
$ ansible all -i myhosts2 -m shell -a 'echo $PATH'
host1.mydom.com | SUCCESS | rc=0 >>
/usr/lib64/qt-3.3/bin:/usr/local/maven-3.2.1/bin:/usr/local/bin:/bin:/usr/bin

In my cfg, I have executable=/bin/bash.

Login vs. non-login shell?

http://stackoverflow.com/questions/27733511/how-to-set-linux-environment-variables-with-ansible
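The distinction is easy to demonstrate locally. `shopt -q login_shell` succeeds only inside a bash login shell, and Ansible's remote command behaves like the non-login, non-interactive case, which skips /etc/profile and ~/.bash_profile:

```shell
# A plain bash -c is a non-login shell; bash -l makes it a login shell.
bash -c  'shopt -q login_shell && echo login || echo non-login'   # -> non-login
bash -lc 'shopt -q login_shell && echo login || echo non-login'   # -> login (possibly after profile output)
```

That is why a PATH set only in login-shell startup files never reaches the shell Ansible spawns.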

Benjamin

Interesting. I was not aware of this: <<Consider adding those environment variables in the .bashrc file. I guess the reason behind this is the login and the non-login shells. Ansible, while executing different tasks, reads the parameters from a .bashrc file instead of the bash_profile or the /etc/profile.>>

So I tried, on just one host, adding the absolute path I need to my $HOME/.bashrc, but it is still not working. I'm not very familiar with pbrun, and I'm not allowed to change the pbrun installation or configuration in any way in this environment.
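The exact edit is an assumption, but judging by the PATH output below it amounted to appending pbrun's directory in ~/.bashrc, something like:

```shell
# Hypothetical ~/.bashrc addition on the test host: append pbrun's directory
# so that non-login shells can find the binary.
export PATH="$PATH:/opt/pb/bin"

# Quick sanity check that the directory is now a PATH component.
case ":$PATH:" in
  *:/opt/pb/bin:*) echo "pbrun dir on PATH" ;;
  *)               echo "pbrun dir missing" ;;
esac
```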

$ ansible all -i myhosts2 -m shell -a 'echo $PATH'
host1.mydom.com | SUCCESS | rc=0 >>
/usr/lib64/qt-3.3/bin:/usr/local/maven-3.2.1/bin:/usr/local/bin:/bin:/usr/bin:/opt/pb/bin

$ ansible all -i myhosts2 -o -m shell -a 'uptime' -b --become-method pbrun
host1.mydom.com | FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "usage: pbrun [-D level] -h | -K | -k | -V\r\nusage: pbrun -v [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-u user\r\n name|#uid]\r\nusage: pbrun -l[l] [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-U user\r\n name] [-u user name|#uid] [-g groupname|#gid] [command]\r\nusage: pbrun [-AbEHknPS] [-r role] [-t type] [-C fd] [-D level] [-g\r\n groupname|#gid] [-p prompt] [-u user name|#uid] [-g\r\n groupname|#gid] [VAR=value] [-i|-s] []\r\nusage: pbrun -e [-AknS] [-r role] [-t type] [-C fd] [-D level] [-g\r\n groupname|#gid] [-p prompt] [-u user name|#uid] file …\r\n", "msg": "MODULE FAILURE", "parsed": false}

$ pbrun -V
Sudo version 1.8.6p3
Sudoers policy plugin version 1.8.6p3
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p3

Maybe you shouldn't depend on the remote user's environment and should instead set the important stuff explicitly:
http://docs.ansible.com/ansible/playbooks_environment.html
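For example, the environment keyword linked above can pin PATH per play instead of relying on the remote user's login files. A sketch, with the play and task names being illustrative:

```yaml
# Hypothetical playbook: prepend pbrun's directory to PATH for every task.
# Note this affects task execution; whether it also affects how the become
# executable itself is located may depend on the Ansible version.
- hosts: all
  become: true
  become_method: pbrun
  environment:
    PATH: "/opt/pb/bin:{{ ansible_env.PATH }}"
  tasks:
    - name: check uptime
      shell: uptime
```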

Benjamin

It looks like your most recent run found pbrun. I recently worked with a client that used pbrun, and here's a brief walkthrough of what they did to fix it. Also, could you retry via a playbook (not ad hoc) and add -vvvv to the run?
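A minimal playbook version of the same ad-hoc check might look like this (the file name is hypothetical):

```yaml
# pbrun-test.yml: playbook equivalent of the ad-hoc uptime test, for
# rerunning with maximum verbosity.
- hosts: all
  become: true
  become_method: pbrun
  tasks:
    - name: uptime via pbrun
      shell: uptime
```

Running it as `ansible-playbook -i myhosts2 -vvvv pbrun-test.yml` should show the exact remote command Ansible constructs, including how pbrun itself is invoked.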