Managing AAP Execution Nodes with SSH Access

We have a customer requiring that we use their server as an execution node to manage our resources on their network, but our only access to it is a remote desktop VDI on their network and the receptor port opened from our control plane. We have a management server outside the cluster that we're using to build the cluster itself, and adding the rest of the nodes is no problem with the usual inventory + setup.sh. I've floated reverse SSH and other creative solutions, but we're getting a hard no on any SSH access here.

I'm not seeing any way to set up a host as an execution node and add it to the inventory without the setup.sh script. From my Google searches it looks like older versions of Tower let you add instances right from the web interface, but that functionality seems to be absent in current releases of AAP Controller (2.4-7.1 installed).
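For context, the supported path looks roughly like this: an [execution_nodes] group in the installer inventory, which setup.sh then reaches over SSH. That SSH hop is exactly the step we can't do here (hostnames and vars below are placeholders, so check them against your installer docs):

```bash
# Sketch of the normal install flow we're blocked from using.
# exec-node.example.com stands in for the customer's server.
cat > inventory <<'EOF'
[automationcontroller]
controller.example.com

[execution_nodes]
exec-node.example.com receptor_listener_port=27199
EOF

./setup.sh -i inventory   # fails for us: the installer needs SSH to the execution node
```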

Based on the "receptor the hard way" forum article, it seems like what I need is technically possible. I've set up a dev cluster so I can hopefully hack together a solution using the Ansible automation bundle, receptor, and some build instructions, but I'd appreciate any guidance on a better way to solve this, or even an outline of what it might actually look like beyond a bunch of pieces that I'm hoping will fit together. This is all VM-based, not using the OpenShift/K8s stuff at all.
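In case it helps anyone sketching the same thing, here's roughly what I think the hand-built node boils down to: a receptor.conf that listens on the one reachable port and advertises an ansible-runner work type. This is a minimal sketch assuming the default port 27199, with TLS and work signing omitted; a real node would use the certs from the controller's install bundle:

```bash
# Hedged sketch of a minimal receptor.conf for a hand-built execution node.
# Production nodes also add tls-server and work-verification sections.
cat > /etc/receptor/receptor.conf <<'EOF'
---
# Unique node ID in the mesh
- node:
    id: exec-node.example.com

# Listen on the one port the control plane can reach
- tcp-listener:
    port: 27199

# Local control socket for receptorctl status / debugging
- control-service:
    service: control
    filename: /var/run/receptor/receptor.sock

# Work type the controller dispatches jobs as
- work-command:
    worktype: ansible-runner
    command: ansible-runner
    params: worker
    allowruntimeparams: true
EOF

systemctl restart receptor
```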

`awx-manage provision_instance` is the secret sauce. It gets the instance into Tower, and from there you can get the receptor bundle.
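For anyone landing here later, a rough outline of the control-plane side, assuming an execution node hostname of exec-node.example.com (run on an existing controller node as the awx user; verify the flags against awx-manage help for your version):

```bash
# Register the hand-built node with the controller (no SSH involved).
awx-manage provision_instance --hostname=exec-node.example.com --node_type=execution

# Put it in an instance group so jobs can be routed to it.
awx-manage register_queue --queuename=default --hostnames=exec-node.example.com

# Record the mesh link: in this topology the controller dials out to
# the receptor port on the customer's server.
awx-manage register_peers controller.example.com --peers exec-node.example.com
```

Once the instance shows up, you can grab the receptor bundle with the TLS material to drop onto the node.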
