How to use jump/gateway hosts

Hi! I cannot find anywhere in the documentation how to use a gateway host (jump host, bastion host, etc). All of our hosts are inside an unreachable network with only one host reachable. We SSH to that host and then on to others.

I searched this mailing list, but I mostly found people talking about using ~/.ssh/config. That’s obviously not the Right Way To Do It – we should be sharing our configuration in the configuration management repository, not setting it in user-specific files. In some places people said they were using ansible’s SSH options to set a ProxyCommand or some other way to accomplish the hop-through, but that also smells, because as far as I can see that’s a global setting. If you have more than one isolated network, you’re going to need different gateway hosts.

This is a really common requirement – surely I’m missing something?

  • Baron

It’s fine to edit ~/.ssh/config, and most people who need jump hosts are OK with that.

However it’s probably better to just log into your bastion host and run SSH from there.

Is it not possible to put the configuration in /etc/ssh/? (Just for the one domain.)

/etc/ssh/ssh_config:
Host *
    ControlMaster auto
    #ControlPersist yes

Host domain.net
    ProxyCommand none

Host *.domain.net
    ProxyCommand ssh -W %h:%p domain.net

Thanks,
Mark

This is what the /etc/ssh/ssh_config file is for. As for specific domains:

Host *.particulardomain.com

Yep, it’s totally possible in Ansible.

If you were asking Baron, yep :)

Hi!

Thanks for responding, everyone. And now that I’ve done some more searching, I’ve found this topic that already discusses the same thing, in the context of a pull request that was rejected.

With respect, I disagree with the Ansible project’s stance on this topic. Based on previous discussions such as the one I linked, I don’t expect this to change anything, but just to state my position succinctly and then let it be:

  1. The fact that machines must be accessed through a gateway is a vital part of the system’s configuration.
  2. Configuration should be contained entirely within the configuration management repository.
  3. The use case is demonstrated by the broad existence of this functionality, in a convenient form, in other tools.
  4. It’s fine to say “there’s no true solution but here’s a workaround.” But if the response invalidates/rejects the use case, that’s discouraging.

Regards,

Baron

You can use ansible to configure ~/.ssh/config in the localhost
machine, that is, in the machine that's doing the management of the
nodes. By doing it that way, maybe via lineinfile, you can ensure that
the configuration is contained entirely within the configuration
management repo as per your point 2.
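
As a minimal sketch of that suggestion (blockinfile is used here rather than lineinfile, since the stanza spans multiple lines; the host names are placeholders, not from this thread):

```yaml
# Hypothetical playbook, run on the management machine itself, that keeps
# the jump-host stanza of ~/.ssh/config under version control.
- hosts: localhost
  connection: local
  tasks:
    - name: Route *.domain.net through the gateway
      blockinfile:
        path: ~/.ssh/config
        create: yes
        marker: "# {mark} ANSIBLE MANAGED: domain.net gateway"
        block: |
          Host *.domain.net
              ProxyCommand ssh -W %h:%p domain.net
```

The marker comments let the task update or remove just its own block on later runs without touching the rest of the user’s file.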

J

Thanks for helping me understand.

I don’t think “in a convenient form in other tools” really applies here. None of those tools are decentralized, and this behavior also has to be configured in them.

In our case, the most flexible place to configure that, if you want to, is currently the SSH configuration file.

Now, all that being said, you wish to keep your configuration in version control. That’s fine. In most of those other tools it’s usual to keep the tool’s configuration file alongside the project; in this case, yes, the difference is that Ansible is decentralized.

Should you choose to keep this in your own configuration file, you can easily solve this problem:

A) set “-F” in your ansible SSH options to point to the configuration file (note: OpenSSH’s flag for an alternate config file is the capital -F)
B) check that file in alongside your playbooks, next to ansible.cfg

This will result in that SSH config being used every time.
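
Concretely, that wiring might look something like this (the file name ssh.cfg and the ssh_args setting are my reading of the suggestion, not quoted from it):

```
# ssh.cfg — checked in next to the playbooks
Host *.domain.net
    ProxyCommand ssh -W %h:%p domain.net

# ansible.cfg — also checked in; hands OpenSSH the file above
[ssh_connection]
ssh_args = -F ssh.cfg
```

One caveat: setting ssh_args replaces Ansible’s default SSH options (ControlMaster and friends), so you may want to restate any of those you rely on in ssh.cfg itself.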

I don’t believe we have ever invalidated the idea that it should be possible, and each time we’ve suggested how it can be done.

The crux of it is as follows – it doesn’t make sense for Ansible to expose an option for every option in OpenSSH. This bloats the system unnecessarily when it’s meant to be a minimal system – let OpenSSH do what it does best, and support feeding arbitrary options to it.

Now, all that being said, I recognize that it may not always be strategic to tunnel everything through jump hosts. If running inside EC2, it is much nicer to run ansible inside EC2 rather than funneling everything through the jump host.

It is for this reason that Ansible and/or AWX may well support multiple workers at some point in the future, as we initially did way back when creating Func. But right now, most people run their EC2 playbooks from a management node inside EC2, etc.

Michael,

Thanks for your response. Just a couple of things to add to previous points:

In my devops repo I have an ansible.cfg which points to the inventory dir
in the same repo. I just run my ansible commands from that repo and it
picks up the config and inventory.

" Ideally, Ansible would look for a local config file in the current working directory, and if that doesn’t exist, go up to parent directories, and if all else fails look system-wide"

Actually it does something very much like this already

ansible.cfg is looked for alongside your playbook, then ~/.ansible.cfg, then /etc/ansible/ansible.cfg

I’m about to push out a docs update today that gives the config file its own chapter.

Perfect, thanks. I did not realize that. What about the host inventory
file? I was under the impression that it had to be in /etc/ansible/hosts,
or mentioned explicitly in an env var or config setting. If that's
discovered in the CWD and parents too, then that's great.

Thanks,
Baron

You can specify it with -i or change the default inventory path in your config file.
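
For example (paths are illustrative, and in older Ansible releases the config key was hostfile rather than inventory):

```
# Command line:
#   ansible-playbook -i inventory/hosts site.yml

# Or in an ansible.cfg checked in alongside the playbooks:
[defaults]
inventory = inventory/hosts
```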