I’m currently just getting the hang of AWX. I’m struggling to find the best solution for a workflow I have in mind, so please bear with a beginner.
I have developed an inventory plugin which filters hosts from an inventory source based on the value of an environment variable.
I want to create a workflow template composed of:
a pipeline which spins up the infrastructure and creates the hosts
a second pipeline which grabs the names of these hosts using my custom inventory plugin
My issue is in building the inventory in the second pipeline. Would it be possible to parameterize the environment variable value so it’s somehow filled in with a variable exported during the first pipeline’s run (via set_stats)?
Is there a simpler way to directly exchange artifacts between pipelines (can I, for example, export an inventory file from the first job and import it into the second)?
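What I picture is something like this at the end of the first pipeline’s playbook; as far as I can tell, set_stats artifacts are handed to later workflow nodes as extra variables (the variable names here are just placeholders):

```yaml
- name: Export the new cluster's ID to the following workflow nodes
  ansible.builtin.set_stats:
    data:
      cluster_id: "{{ new_cluster_id }}"  # placeholder variable name
    per_host: false
```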
I had a look at the Smart Inventories feature and it seems really cool. Unfortunately, it does not seem to be the solution for me; please correct me if I got it wrong:
The 1st pipeline creates new hosts in the infrastructure and pushes information about them to a DB (hostname and some cluster identification data), so the hosts were not part of any previous inventory at this point.
In the 2nd pipeline, the custom inventory plugin is invoked (via an environment variable which points to the new cluster’s ID, unique to this new cluster and generated during the first pipeline’s run), and it queries the DB, filtering hosts based on that environment variable.
If I get the Smart Inventory feature correctly, “it allows you to generate a new Inventory that is made up of hosts existing in other Inventories in Ansible Tower”, but that’s not the case for me; sorry for missing some details in the first post. Also, the first pipeline is run against localhost, so there is no possibility to set some facts there.
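For context, the source configuration for my custom plugin looks roughly like this (the plugin and option names below are made up for illustration; the point is the environment-variable fallback):

```yaml
# Hypothetical config for my custom plugin; names are illustrative only.
plugin: mycompany.inventory.db_hosts
# If cluster_id is not set here, the plugin falls back to the
# CLUSTER_ID environment variable and uses it to filter hosts in the DB.
cluster_id: ""
```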
Currently, I am able to set the environment for the second pipeline, exporting the cluster_id from the first one and using it, but I cannot alter the Tower Inventory object’s envvar, even though the custom script seems to be executed once again when the 2nd pipeline is invoked.
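Concretely, the second pipeline receives the ID as an extra variable and exports it at play level, roughly like this (again, names are placeholders); the tasks see CLUSTER_ID, but the inventory source update does not:

```yaml
- hosts: all
  # cluster_id arrives as an extra variable via the workflow artifacts
  environment:
    CLUSTER_ID: "{{ cluster_id }}"
  tasks:
    - name: Show which cluster this run targets
      ansible.builtin.debug:
        msg: "Running against cluster {{ cluster_id }}"
```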
I had a similar issue a few years back. I had a workflow that created a VM in vSphere, but to do any follow-on actions in that VM, it had to exist in the inventory.
Initially, I was going to do an inventory sync against VMware, but that was taking forever, like 45 minutes. This was about two years ago, so I don’t know if the VMware inventory plugin ever got any better.
Next, I ended up writing a custom inventory script. After the VM got created, it would simply write the hostname to a file on the Tower host. Then, my inventory script would run, ingest that file, and add the hosts to the inventory. That worked for a while.
I ended up scrapping the custom script and just adding a ‘tower_host’ task to my workflow, between the VM creation and the steps where configuration starts:
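Something along these lines; the inventory name and variables below are placeholders from memory, not the exact values I used:

```yaml
- name: Add the freshly created VM to the Tower inventory
  tower_host:
    name: "{{ vm_hostname }}"      # placeholder variable
    inventory: "Production"        # placeholder inventory name
    state: present
    variables:
      ansible_host: "{{ vm_ip }}"  # placeholder variable
```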
Thanks a lot for the nice ideas you gave me here! I really appreciate your input!
My current issue is more complicated, as I am trying to implement a workflow which works both from AWX and locally from the command line, if needed.
I have implemented in the custom inventory script the possibility to define the filter for searching hosts in the inventory DB not only via environment variables, but also via a variables override file. Unfortunately, this needs to be an absolute path, and the file is created during the workflow and deleted from the system at the end of it (no matter whether the run failed or passed). This way, at the start of the second job, the inventory call picks up the new filter. It’s not the most elegant solution, but it will have to work for now.
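Within a single playbook, the create-then-always-clean-up idea looks roughly like this (the path and the filter key are made up for the example; in the actual workflow the cleanup runs as a final node on both the success and failure paths):

```yaml
- block:
    - name: Write the inventory filter override file
      ansible.builtin.copy:
        dest: /var/tmp/inventory_filter.yml  # made-up absolute path
        content: |
          cluster_id: "{{ cluster_id }}"
    # ... the rest of the run happens while the file exists ...
  always:
    - name: Remove the override file whether the run passed or failed
      ansible.builtin.file:
        path: /var/tmp/inventory_filter.yml
        state: absent
```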