I replaced “awx_task_hostname=awx” with “awx_task_hostname=myhostname” in the inventory file and performed the installation. Now I can see myhostname in “Instance Groups” under “tower”, but when I try to launch job templates they just sit in “Pending” status. If I revert to “awx_task_hostname=awx”, everything works fine.
The “awx_task_hostname” option in the inventory file: https://github.com/ansible/awx/blob/049d642df8ea6d593448d
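For context, the change in question is a single variable in the installer’s inventory file, along these lines (the hostname value is only an example):

    # installer inventory: hostname the task container registers itself under
    awx_task_hostname=myhostname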
More information from the original post:
We’re trying to figure out how to identify, from the API, the host that AWX is running on. The use case is multiple single-instance AWX deployments behind a load balancer.
Using the “X-API-Node” response header seems like the right approach, but it defaults to “awx”. It looks as though this could be set with “awx_task_hostname”; however, that doesn’t seem to work (a minimal check of the header is sketched after this post).
Regards,
David Reno
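For illustration, a minimal sketch of the check being attempted, in Python, assuming an AWX instance reachable at a placeholder URL with placeholder credentials:

    import requests

    AWX_URL = "https://awx.example.com"  # hypothetical load-balanced endpoint
    AUTH = ("admin", "password")         # placeholder credentials

    # API responses carry an X-API-Node header naming the instance that
    # served the request; on a default install it comes back as "awx".
    resp = requests.get(f"{AWX_URL}/api/v2/ping/", auth=AUTH)
    print(resp.headers.get("X-API-Node"))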
How are you deploying this?
This is going to be a combination of the Docker container hostname and what gets provisioned as part of the initialization script: https://github.com/ansible/awx/blob/devel/installer/roles/image_build/files/launch_awx_task.sh#L25
X-API-Node is just going to return the Instance that answered the web request… that particular bit is more useful in cluster-based installs, which it sounds like you aren’t doing? Putting standalone AWX deployments behind a single load balancer is… strange.
You’re treading into territory that is a bit outside of our supported scope… it seems like you are trying to set up a cluster without configuring the systems as a cluster and that’s not going to work the way you think it’s going to.
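As an aside, in a deployment that is actually configured as a cluster, the registered instances are visible from the API; a sketch, again with a placeholder URL and credentials:

    import requests

    AWX_URL = "https://awx.example.com"  # placeholder
    AUTH = ("admin", "password")         # placeholder credentials

    # /api/v2/instances/ lists every instance registered with the cluster;
    # a standalone install shows exactly one entry.
    for inst in requests.get(f"{AWX_URL}/api/v2/instances/", auth=AUTH).json()["results"]:
        print(inst["hostname"], inst["capacity"])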
Matt,
Fair points; my deployment intent is clearly beyond any designed use case. For scale, the designed solution seems to be AWX clustering with a single-instance PostgreSQL database. My concerns are the scale, availability, and network latency of that backing database, as well as AWX upgrades. Since I can’t account for the database’s limits or what could go wrong during an upgrade, my thought was to replace it with things I can account for, chiefly synchronization of job templates across many single-instance AWX deployments. The scale involved here is playbooks of roughly 800 tasks running against an inventory of 25,000 hosts that must complete in about a week. The geography spans the US.
The thought of knowing I’ve distributed the problem and can solve scalability by simply adding stand-alone nodes is seductive. It also addresses my concern about rolling upgrades to the AWX system (kill/build). To make it work, I’d need to make an API call to launch a template; I’d need to leverage the named_url feature, since I can’t rely on template ids; and I’d need some method of identifying the host a job runs on in order to check its status (web, task, and db would all be on the same host). I wouldn’t care about long-term reporting/status; I’d scrape success/failure and store it externally. This is work, but (assuming I can get the hostname to make the status call) it seems to be fully within the design parameters of AWX.
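A rough sketch of that launch-and-poll loop, assuming named URLs are enabled for job templates (the exact path format depends on the NAMED_URL_FORMATS setting) and using placeholder hostnames, template names, and credentials:

    import time
    import requests

    NODE = "https://myhostname.example.com"  # one of the standalone AWX nodes
    AUTH = ("admin", "password")             # placeholder credentials

    # Launch by named URL rather than numeric id, since ids differ per node.
    launch = requests.post(
        f"{NODE}/api/v2/job_templates/my-template/launch/", auth=AUTH
    )
    job_id = launch.json()["job"]

    # Poll the same node for the result, then record it externally.
    while True:
        job = requests.get(f"{NODE}/api/v2/jobs/{job_id}/", auth=AUTH).json()
        if job["status"] in ("successful", "failed", "error", "canceled"):
            print(job["status"])  # scrape success/failure into external storage
            break
        time.sleep(30)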
I’m not trying to create something overly complex or brittle; I’m just looking to account for scalability and to de-risk upgrades and recovery when things break.
Sincerely,
David