I am trying to run an Ansible playbook from my host machine to a target server, with traffic routed through a proxy:
HOST -> PROXY -> TARGET
The challenge is that the proxy server requires an interactive login (e.g., confirming a push notification) every time I connect. No, I can’t use ssh keys … To work around this, I set up an SSH connection manually as follows:
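Something along these lines, using the PROXY/TARGET labels above as placeholders (the exact options in my real command may differ):

$ ssh -fN -o ControlMaster=auto \
      -o ControlPath=~/.ssh/cm-%C \
      -o ControlPersist=4h \
      -J PROXY TARGET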
The issue is that I have to pre-create a connection to each target server before running the playbook.
Is there a way to establish one connection to the proxy server and let Ansible handle creating connections to the target servers automatically? This would greatly simplify the process.
I set up all the agent forwarding, control channel, etc. in my ~/.ssh/config. When I log in to my local workstation I execute
$ ssh-add ~/.ssh/id_rsa
which prompts for my ssh key passphrase and adds the key to my ssh agent. That does not help at all with my connection to the PROXY (to use your labeling). Establishing that connection requires a password and a 2nd factor login. But it also establishes the ssh channel through which I can jump to all the back-end or TARGET hosts.
The relevant parts of my ~/.ssh/config follow:
CanonicalizeHostname always
CanonicalDomains example.com it.example.com

Host *
    ForwardX11 yes
    ServerAliveInterval 180
    ServerAliveCountMax 6
    ConnectTimeout 15

Host localhost
    CanonicalizeHostname no

# These hosts are local to my house,
# requiring only simple, local connections.
Host able baker charlie
    Port 22
    CanonicalizeHostname no
    ProxyCommand none
    PreferredAuthentications publickey

# This is the PROXY host through which
# ssh must jump to get to almost everything
# in the *.example.com domain.
Host proxy.it.example.com
    ProxyCommand none
    AddKeysToAgent no
    ForwardAgent yes
    ControlMaster auto
    ControlPersist 4h
    ControlPath ~/.ssh/controlpath/socket-%C

# These hosts at work don't require ssh to
# jump through the proxy.it.example.com host.
Host runescape1.example.com adventure.it.example.com
    ProxyCommand none

# All other hosts at example.com require ssh
# jumps through the proxy.it.example.com host.
Host *.example.com
    ProxyCommand ssh -W %h:%p proxy.it.example.com
The order of the sections above matters, though perhaps not in the way you’d expect: for each option, ssh uses the first value it obtains while reading the file, so a later section can add settings an earlier one didn’t specify, but it cannot override them. That’s why “Host *” sits at the top here: it holds the settings I want applied everywhere, and the more specific host sections below only add to them.
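A minimal illustration of that first-match rule, with an invented host and ports:

Host wopr
    Port 2222

Host *
    Port 22

Connecting to wopr uses port 2222 because that value is obtained first; if the “Host *” section came first instead, its Port 22 would win.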
After I’ve added my ssh key (with passphrase) to my ssh-agent, I just
$ ssh proxy.it.example.com
which prompts for my password and 2FA preferences. Then back on my local host I can say, for instance (the target name below is a placeholder):
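$ ssh some-target.example.com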
Since all the back-end or TARGET hosts have my public key in ~/.ssh/authorized_keys, I don’t have to think about passwords for the rest of the day. (Unless I use “-bK” to become root, in which case I still have to answer the sudo password prompt.)
@utoddl Your post is amazing; I have had a hard time finding any info on making this work at all. Thanks for sharing!
The one thing I have not gotten to work yet is making Ansible use the same ControlPath as specified in ssh_config. How did you manage this, and can you by chance also share your Ansible config / inventory?
You shouldn’t need to tell Ansible to do anything.
In Todd’s example, the ControlPath for the proxy is defined in the proxy’s own Host entry, and the master connection persists for four hours. Every connection that jumps through the proxy then reuses its control socket. If you leave Ansible’s SSH settings at their defaults, Ansible creates a control socket per inventory host, but each of those connections still routes through the proxy’s socket automatically, as long as the hostnames match the rules in the ssh config.
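If you want to verify that, ssh can query a master socket directly (the pid in the output here is illustrative):

$ ssh -O check proxy.it.example.com
Master running (pid=12345)

Ansible’s own per-host sockets land under ~/.ansible/cp by default, so you can watch them appear there during a run.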
One of the neatest flags on the ansible-config dump command is --only-changed. I try to change only what I have to, and as a personal rule I never use ansible.cfg files other than /etc/ansible/ansible.cfg. If I need to override any defaults, I prefer to do it with environment variables.
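For example, something like this shows any non-default settings and then overrides one of them for a single run (the playbook name is a placeholder):

$ ansible-config dump --only-changed
$ ANSIBLE_SSH_ARGS='-o ControlMaster=auto -o ControlPersist=4h' ansible-playbook site.yml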
As for our inventory, it contains nothing but hosts in groups. No connection information is permitted there by group consensus. (The flip side of that coin is that we’re lucky enough that we connect to all our hosts the same way and don’t need to mess up our inventory with variables for special cases.)
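It’s just groups of hosts, along these lines (the group and host names here are invented for illustration):

[webservers]
alpha.example.com
beta.example.com

[dbservers]
gamma.it.example.com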
As you can see, there’s nothing there that affects connections.
It’s possible you have something set in your ansible.cfg that’s messing up your ControlPath. Check for that.
Heya folks, thank you for the quick responses. What you describe matched my intuition, but I had some additional config things going on in my inventory that caused this to happen and I did not put 2+2 together. Then I found another issue with Ansible not recognising my ssh_config, but I have got this sorted now and I am on my merry way. Thanks loads again! o/