Ansible Playbook Through Proxy with Interactive Login

Hello,

I am trying to run an Ansible playbook from my host machine to a target server, with traffic routed through a proxy:

HOST -> PROXY -> TARGET

The challenge is that the proxy server requires an interactive login (e.g., confirming a push notification) every time I connect, and no, I can't use SSH keys for it. To work around this, I set up an SSH connection manually as follows:

    ssh -f -N \
        -o ControlMaster=auto \
        -o ControlPath=~/.ssh/sockets/%r@%h:%p \
        -o ControlPersist=1h \
        -o ForwardAgent=yes \
        -J "$JUMP_USER"@"$JUMP_SERVER" \
        "$TARGET_USER"@"$TARGET_SERVER"

and then I reuse this connection like this:

    ansible-playbook \
        playbooks/"$PLAYBOOK" \
        --user "$TARGET_USER" \
        --ask-become-pass \
        --ssh-common-args "\
            -o ControlMaster=auto \
            -o ControlPath=~/.ssh/sockets/%r@%h:%p \
            -o ControlPersist=1h \
            -o ForwardAgent=yes \
            -J ${JUMP_USER}@${JUMP_SERVER}"

The issue is that I have to pre-create a connection to each target server before running the playbook.

Is there a way to establish one connection to the proxy server and let Ansible handle creating connections to the target servers automatically? This would greatly simplify the process.

I set up all the agent forwarding, control channel, etc. in my ~/.ssh/config. When I log in to my local workstation, I execute

$ ssh-add ~/.ssh/id_rsa

which prompts for my ssh key passphrase and adds the key to my ssh agent. That does not help at all with my connection to the PROXY (to use your labeling). Establishing that connection requires a password and a second-factor login. But it also establishes the ssh channel through which I can jump to all the back-end or TARGET hosts.

The relevant parts of my ~/.ssh/config follow:

CanonicalizeHostname always
CanonicalDomains example.com it.example.com

Host *
  ForwardX11            yes
  ServerAliveInterval   180
  ServerAliveCountMax     6
  ConnectTimeout         15

Host localhost
  CanonicalizeHostname no

# These hosts are local to my house,
# requiring only simple, local connections.
Host able baker charlie
  Port                     22
  CanonicalizeHostname     no
  ProxyCommand             none
  PreferredAuthentications publickey

# This is the PROXY host through which
# ssh must jump to get to almost everything
# in the *.example.com domain.
Host proxy.it.example.com
  ProxyCommand   none
  AddKeysToAgent   no
  ForwardAgent    yes
  ControlMaster  auto
  ControlPersist   4h
  ControlPath    ~/.ssh/controlpath/socket-%C

# These hosts at work don't require ssh to
# jump through the proxy.it.example.com host.
Host runescape1.example.com adventure.it.example.com
  ProxyCommand none

# All other hosts at example.com require ssh
# jumps through the proxy.it.example.com host.
Host *.example.com
  ProxyCommand ssh -W %h:%p proxy.it.example.com

The order of the sections above matters: ssh uses the first value it obtains for each option, so a later section can only supply settings that an earlier matching section hasn't already set; it cannot override them. That's why the hosts with "ProxyCommand none" come before the catch-all "Host *.example.com" section, which would otherwise send them through the proxy as well. (The "Host *" section only sets options that none of the more specific sections touch, so its position doesn't cause conflicts here.)
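By the way, a quick way to verify which settings win for a given host is "ssh -G", which prints the fully resolved configuration ssh would use (the hostname below is made up):

$ ssh -G widget.example.com | grep -iE 'proxycommand|controlmaster|controlpath'

For a host matched only by the catch-all section you should see the ProxyCommand jump through proxy.it.example.com; for the proxy itself you should see its ControlMaster and ControlPath settings.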

After I’ve added my ssh key (with passphrase) to my ssh-agent, I just

$ ssh proxy.it.example.com

which prompts for my password and 2FA preference. The ControlPersist master connection stays up even after I exit that shell, so back on my local host I can say

$ ansible-playbook playbook_zero.yml --limit=hostgroup_zero -vv

Since all the back-end or TARGET hosts have my public key in ~/.ssh/authorized_keys, I don't have to think about passwords for the rest of the day. (Unless I use "-bK" to become root, in which case I answer the prompt for a sudo password.)
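For what it's worth, the whole routine fits in a small wrapper script. This is only a sketch based on the ~/.ssh/config above; the key path, proxy host, and playbook/group names are examples, so adjust to taste:

    #!/usr/bin/env bash
    set -euo pipefail

    # Load the ssh key into the agent if it isn't there already.
    ssh-add -l >/dev/null 2>&1 || ssh-add ~/.ssh/id_rsa

    # Reuse the proxy's ControlMaster socket if one is already up;
    # otherwise open one (this is where the password/2FA prompt appears).
    if ! ssh -O check proxy.it.example.com 2>/dev/null; then
        ssh -f -N proxy.it.example.com
    fi

    ansible-playbook playbook_zero.yml --limit=hostgroup_zero -vv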


@utoddl Your post is amazing; I've had a hard time finding any info on making this work at all. Thanks for sharing!

The one thing I haven't gotten to work yet is making Ansible use the same ControlPath as specified in ssh_config. How did you manage this, and can you by chance also share your Ansible config / inventory?

Thanks loads again!


You shouldn’t need to tell Ansible to do anything.

In Todd's example, the ControlPath for the proxy is defined in the proxy's own entry, and its socket stays active for four hours. All hosts that jump through the proxy then reuse that socket for the jump. If you leave Ansible's ssh settings at their defaults, Ansible creates a socket per inventory host, but each of those connections still routes through the proxy's socket automatically, as long as the hostnames match the rules in your ssh config.
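For reference, leaving Ansible at its defaults is roughly equivalent to the following in ansible.cfg; the exact values can vary between ansible-core versions, so treat this as a sketch rather than gospel:

    [ssh_connection]
    ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
    control_path_dir = ~/.ansible/cp

Ansible keeps its per-host sockets under control_path_dir, while the jump through the proxy picks up the ControlPath from ~/.ssh/config, so the two don't conflict.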


Yeah, what he said!

One of the neatest flags on the ansible-config dump command is --only-changed. I try to change only what I have to, and as a personal rule I never use ansible.cfg files other than /etc/ansible/ansible.cfg. If I need to override any defaults, I prefer to do it with environment variables.

As for our inventory, it contains nothing but hosts in groups. No connection information is permitted there by group consensus. (The flip side of that coin is that we’re lucky enough that we connect to all our hosts the same way and don’t need to mess up our inventory with variables for special cases.)
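To illustrate (the group and host names here are invented), the inventory is nothing more than entries like:

    [hostgroup_zero]
    web01.example.com
    web02.example.com

    [hostgroup_one]
    db01.example.com
    db02.example.com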

Here’s all the config that affects me:

(venv-python311-ansible-core-216) [utoddl@ssh tower]$ ansible-config dump --only-changed
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = memory
COLLECTIONS_PATHS(env: ANSIBLE_COLLECTIONS_PATH) = ['/home/utoddl/.ansible/collections']
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 5
DEFAULT_GATHERING(env: ANSIBLE_GATHERING) = smart
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/home/utoddl/tower/mw-ansible-defaults/inventory/hosts']
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /var/log/ansible.log
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = yaml
DEFAULT_TIMEOUT(env: ANSIBLE_TIMEOUT) = 30
DEFAULT_VAULT_IDENTITY(env: ANSIBLE_VAULT_IDENTITY) = …elided…
DEFAULT_VAULT_IDENTITY_LIST(env: ANSIBLE_VAULT_IDENTITY_LIST) = […also_elided…]
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
EDITOR(env: EDITOR) = ne
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = auto
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(env: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS) = silently

As you can see, there’s nothing there that affects connections.
It’s possible you have something set in your ansible.cfg that’s messing up your ControlPath. Check for that.
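If you want to see exactly what Ansible is handing to ssh, ControlPath included, run a play with -vvvv and look for the "SSH: EXEC" lines in the output, for example:

$ ansible-playbook playbook_zero.yml --limit=hostgroup_zero -vvvv

Those lines show the full ssh command line per host, which makes it easy to confirm whether your ssh_config and ControlPath are actually being picked up.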


Heya folks, thank you for the quick responses. What you describe matched my intuition, but I had some additional config in my inventory that was causing this, and I didn't put two and two together. Then I found another issue with Ansible not recognising my ssh_config, but I've got that sorted now and I'm on my merry way. Thanks loads again! o/
