Support for Proxying WinRM Calls

Hiya everyone!

I’ve got kind of a strange need and I’m pretty sure I can put together a PR to address it, but I wanted to pitch it here first.

I have a whole bunch of hosts in various datacenters that can only be accessed from within their own datacenter. In each of those datacenters we have interconnect boxes that can be used as a bastion for that datacenter. Ideally, I’d like to set up a central “Executor” host that could serve as the brains behind any remote orchestration/provisioning/config management. For Linux hosts, I just use an easy peasy SSH config to proxy through those bastions. For Windows, since WinRM uses HTTPS, I can use nginx to proxy traffic to the correct machine. In my testing, that seems to work fairly nicely. We actually use nginx proxies to hit APIs of some of our internal products to work around this access issue, so I get to score points for using existing infrastructure to solve this problem.
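For context, the SSH side of this looks roughly like the following (the hostnames here are made up for illustration):

```
# ~/.ssh/config -- hypothetical hostnames, sketching the bastion setup described above
Host bastion-dc1
    HostName interconnect.dc1.example.com
    User ops

# Proxy every host in dc1 through that datacenter's interconnect box
Host *.dc1.example.com
    ProxyJump bastion-dc1
```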

The problem is, I’d like to minimize having to manage a whole bunch of CNAMEs to point to these proxies (my team has to go through ANOTHER team for DNS stuff, so it’d be an extra cog I’d love to avoid). For our APIs, we have a model like:

http://some.proxy.address.com:XXXX/target_host_name.api

and that location (to use the nginx lexicon) is routed to the correct machine. I’d really like to do something similar with Ansible. I’ve tested that I can put such an address in the inventory, and Ansible will attempt to run against it, but it constructs the WinRM endpoint incorrectly. I end up with:
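To make the routing model concrete, the nginx side could look something like this (a rough sketch with made-up names, not our actual config):

```
# Hypothetical nginx config sketching the location-based routing described above
server {
    listen 5986 ssl;
    server_name windows.proxy.name.com;

    # Requests for /target_host_name/... are forwarded to that
    # machine's WinRM listener inside the datacenter
    location /target_host_name/ {
        proxy_pass https://target_host_name.internal:5986/;
    }
}
```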

https://windows.proxy.name.com/target_host_name:5986/wsman

I think I’ve found where this endpoint is constructed, and it looks fairly trivial to have it place the port before the first slash instead. But before I started hacking on it and put together a PR, I wanted to know if anyone had a reason why it SHOULDN’T work this way, or if I’m doing everything dumb and there’s a way better answer. If this isn’t a PR that would be accepted, I’d rather spend time on a different solution, as I don’t think my team would continue to maintain an Ansible fork.
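To pin down what “place the port before the first slash” would mean, here’s a small Python sketch; the function and variable names are mine for illustration, not Ansible’s actual internals:

```python
# Sketch of the proposed endpoint construction (illustrative names,
# not Ansible's real winrm connection plugin code).

def build_winrm_endpoint(host, port=5986, scheme="https", default_path="wsman"):
    """Split any path component off the inventory address, so that
    'windows.proxy.name.com/target_host_name' produces
    https://windows.proxy.name.com:5986/target_host_name/wsman
    rather than https://windows.proxy.name.com/target_host_name:5986/wsman.
    """
    hostname, _, path = host.partition("/")
    path = (path.rstrip("/") + "/" if path else "") + default_path
    return "%s://%s:%d/%s" % (scheme, hostname, port, path)

print(build_winrm_endpoint("windows.proxy.name.com/target_host_name"))
# https://windows.proxy.name.com:5986/target_host_name/wsman

print(build_winrm_endpoint("plain.host.example.com"))
# https://plain.host.example.com:5986/wsman
```

Plain hostnames with no path component fall through to the usual /wsman endpoint, so existing inventories would be unaffected.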

So what do you think? Sound sane and reasonable or dumb and crazy?

Thanks for any input guys!

That sounds reasonable to me. v2 has added better support for connection
plugins to access host variables, so I could see a host variable such as
ansible_winrm_path being used by the winrm connection plugin to set a
custom path instead of /wsman. It's not implemented yet, but it's
something I've already been thinking about.
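If a variable like that landed, the inventory could look something like this (ansible_winrm_path is the proposed, not-yet-implemented variable, and the hostnames are made up):

```
[windows_dc1]
target_host_name ansible_host=windows.proxy.name.com ansible_port=5986 ansible_winrm_path=/target_host_name/wsman
```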

No objection to this, but I would consider having Ansible controllers as local as you can to the hosts they are controlling. I tried controlling hosts on a different continent but experienced timeouts and slower playbook runs. Things are running much more reliably for me with an Ansible instance inside each datacenter.

For sure that would be preferable, but I haven’t been able to think of a simple way to do it without essentially using Ansible to run Ansible on my remote instances. We have a home grown tool that does that now that I’m trying to phase out, as we don’t really have the time/manpower to continue developing it. What would be rad is if we could have an “Ansible cluster”, something like how Salt handles this (A central Salt Master delegates tasks out to satellite Salt Masters), but I think that goes well beyond the scope of what Ansible wants to do.

My end goal is to set up something like Rundeck (company won’t pay for Tower :() and have a central self-service portal for remote execution and code deploys, so I’d like to have the ability to kick stuff off from one spot. Almost all of our datacenters are in the U.S., and that isn’t likely to change within the next year or two.

Although I suppose a solution would be having Rundeck SSH to all of our interconnect boxes and running Ansible from there… That might be a better solution.