I’ve got a machine that needs remote_tmp set to `$HOME/.ansible/tmp`, but that setting gives me issues on the rest of the boxes. So the option in the config file currently says `remote_tmp = /tmp`.
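The relevant stanza looks roughly like this (sketching it from memory, since I didn’t paste the actual file):

```
# ansible.cfg shared by all the boxes; remote_tmp lives under [defaults]
[defaults]
remote_tmp = /tmp
```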
I’ve tried to set the option in the inventory file for this host only:
```
commando remote_tmp=$HOME/.ansible/tmp
```
and also as a host variable in host_file_dir/host_vars/hostname:
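Something along these lines (reconstructing the snippet here, since it didn’t survive in the post):

```
# host_file_dir/host_vars/commando: a reconstruction of what I tried
remote_tmp: $HOME/.ansible/tmp
```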
Patches to add it as an inventory variable would be accepted (just apply to any group you need), but I’m not sure it really belongs as a playbook keyword.
Ok. So what are my options here? I cannot be the only person with a situation like this.
Coping with diverging OS baseline installs is one of the reasons ansible is used, after all?
Is there not any workaround?
There is not, which is not saying I’m unsympathetic.
Needing to specify remote_tmp is an infrequent thing, and not really a common OS-divergence issue most people run into anymore. Most folks just pick a path that works, like $HOME/tmp.
I’m a bit curious why the $HOME-related option didn’t work across the board? Does the user not have a homedir?
Well, if I set remote_tmp to the default I get the same error message as above on ~50% of my servers. Setting it to /tmp gives me issues with this single server.
Having close to 400 boxes, I’m inclined to lean towards the less damaging option. It somehow has to do with the fact that on many of those failing 50% of boxes the home dir is shared through NFS. But an NFS home dir doesn’t guarantee an error either: it just makes failure more likely. (I.e. I haven’t figured out yet what the real issue is…)
Reading about this, isn’t it possible to have a conf file in the playbook dir? Wouldn’t that take precedence over the main one?
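From what I’ve read, ansible picks the first config it finds among ANSIBLE_CONFIG, ./ansible.cfg, ~/.ansible.cfg and /etc/ansible/ansible.cfg, so something like this in the playbook dir should win (a sketch, assuming that lookup order):

```
# ansible.cfg in the playbook directory: found before ~/.ansible.cfg
# and /etc/ansible/ansible.cfg, so it takes precedence for runs
# started from this directory
[defaults]
remote_tmp = $HOME/.ansible/tmp
```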
Yes, /home is on NFS on some systems, but even if that raises the chances of issues with ansible, it’s not conclusive. Some of the hosts that do have /home shared don’t show any issues. So there’s something going on, but I haven’t figured it out yet.
No, but root or whatever user you are running ansible as (su, sudo, plain user) could have different permissions on an NFS mount than on a local filesystem. root could be squashed, or IDs could be mapped incorrectly.
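For instance, with a fairly typical export like this (a hypothetical server-side /etc/exports), root on the client gets squashed to nobody and can’t write into the user’s homedir at all:

```
# /etc/exports on the NFS server: root_squash (the default) maps
# client uid 0 to nobody, so a sudo'd ansible run can't write there
/home  192.168.0.0/24(rw,sync,root_squash)
```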
I’ve been looking at all that, but work gets in the way!
Right now I’m going to ignore that error on the lone box and get some things done. When I’m finished with the whole reorganisation, the issue will most probably have gone away…
anyway, thanks for the help! appreciated!
Just bumping this thread to let interested parties know I found the solution for this.
I had this line in .ansible.cfg:
```
ask_sudo_pass = True
```
Once that was removed, all the issues disappeared.
Don’t really see why, but the fact remains: no problems whatsoever.
I’m guessing ansible’s behaviour somehow changes in ways I can’t see.
I connect through a user and then sudo to root. The first stage is done through ssh certs. No passwds there.
The second is a normal sudo.
If I add -K to the command line, everything works flawlessly. With the setting in the .cfg file I get all the weird behaviour you can read about in this thread.
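In other words, this works (the playbook name is just a placeholder):

```
# prompt for the sudo password interactively instead of setting
# ask_sudo_pass = True in the config file
ansible-playbook -K site.yml
```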
Ok, thanks. If ask_sudo_pass in the config is causing --sudo, then this should definitely be a ticket. We fixed this implication in the CLI, but possibly not here.
I realize this is nearly 3 years old. However, I’ve run into and finally diagnosed this issue (on my systems). The FUSE code in the kernel makes a FUSE filesystem readable only by the owning user… regardless of filesystem permissions. Not even root can override that.
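For anyone hitting the same thing: FUSE can be told to relax that restriction with the allow_other mount option, which in turn needs user_allow_other in /etc/fuse.conf when mounting as a non-root user (a sketch; the host and paths are made up):

```
# /etc/fuse.conf: required before non-root users may pass allow_other
user_allow_other

# e.g. an sshfs mount that other users (including root) can then read:
sshfs user@fileserver:/export/home /mnt/home -o allow_other
```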
It would be nice to be able to set remote_tmp on a per-server basis (e.g. in the .cfg file).
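For later readers: newer ansible releases (2.4 onwards, if I remember correctly) expose this as the ansible_remote_tmp variable, so it can be set per host after all, e.g. in host_vars (a sketch under that assumption):

```
# host_vars/commando.yml: assuming ansible 2.4+, where remote_tmp
# is exposed as the per-host ansible_remote_tmp variable
ansible_remote_tmp: "$HOME/.ansible/tmp"
```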