Hello,
I’m using the latest AWX version and have set up logging to Logstash with the following loggers:
“activity_stream”,
“job_events”,
“system_tracking”,
“broadcast_websocket”
This works fine so far.
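For reference, that setup corresponds roughly to these AWX log aggregator settings (a sketch; the type/host values depend on your environment):

```json
{
  "LOG_AGGREGATOR_TYPE": "logstash",
  "LOG_AGGREGATOR_LOGGERS": [
    "activity_stream",
    "job_events",
    "system_tracking",
    "broadcast_websocket"
  ]
}
```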
My issue is that we are using workflow job templates. Within these templates, a number of job templates and workflow job templates are being executed.
From all these workflow job templates and job templates I get logs, but only within their own scope. So, let’s say, logs from job 1 don’t carry the information that it belongs to workflow job 1 (the one we initially execute).
My goal is to create dashboards of our workflow jobs in Kibana which show, for example, which kind of workflow failed, how many times, and where.
But for that, the jobs within the “core” workflow job need to know that they belong to it.
Am I missing something? In AWX I can just check every workflow and the jobs inside of it, so why doesn’t that information get logged?
We’re logging to Splunk rather than Logstash, but otherwise there’s not much difference (I’m guessing). I do see our workflow job events being logged by awx.analytics.job_events.
Are you sure your workflow events aren’t there? Look for events with a non-null workflow_job_id.
I’ve never looked for this specifically in our logs; I’ve always crawled around the AWX web GUI. So I’m not sure what else to suggest.
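As a quick check, something like this over an exported batch of events (one JSON record per line; the field name workflow_job_id is assumed to match what awx.analytics.job_events emits) would show whether they’re there:

```python
import json

# Hypothetical export: one awx.analytics.job_events record per line (JSON).
def workflow_events(lines):
    """Yield only events that carry a non-null workflow_job_id."""
    for line in lines:
        event = json.loads(line)
        if event.get("workflow_job_id") is not None:
            yield event
```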
That’s correct, but if there is a workflow within a workflow, as in our example, the inner workflow overwrites this field. It also doesn’t seem to log the name of the workflow anywhere in those logs, except for one log at the beginning from another logger. Maybe this is some rare use case?
The best solution I found (which is no good solution to me) is to use the guid and aggregate over all logs with that guid, adding the workflow name and workflow job id as separate fields to all of them. That gets tricky as well when there are workflows within workflows, and I consider it a bad solution since it doesn’t allow parallel processing. I also thought that if the AWX GUI can show me a whole workflow with all its tasks, the logs should be able to represent that in some way as well.
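For concreteness, the guid workaround described above could be sketched like this (the field names "guid", "workflow_job_id", and "workflow_job_name" are assumptions about the log schema):

```python
from collections import defaultdict

def stamp_workflow(events):
    """Group events by guid and copy the workflow id/name from the
    workflow-level event onto every event in the same group.
    Hypothetical field names; adjust to the actual log schema."""
    by_guid = defaultdict(list)
    for event in events:
        by_guid[event["guid"]].append(event)

    stamped = []
    for group in by_guid.values():
        # Pick the first event in the group that knows its workflow job.
        wf = next((e for e in group if "workflow_job_id" in e), {})
        for event in group:
            stamped.append(dict(
                event,
                workflow_job_id=wf.get("workflow_job_id"),
                workflow_job_name=wf.get("workflow_job_name"),
            ))
    return stamped
```

As noted, this requires seeing the whole group at once, which is exactly why it blocks parallel processing.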
I tried matching up logging events to results from /api/v2/workflow_jobs/. I don’t know where the AWX GUI gets its network graph data from, but if all that information is being logged, it isn’t obvious to me how to reconstruct those graphs from the logged data.
Maybe you could periodically grab extra data from the API to inform your log-based dashboard building process?
Thanks for your input! I’ll check that out on Monday. I don’t want to reproduce the AWX view in Kibana; it’s more that I want a dashboard showing how many workflows with a given name have been executed and what the success rate is. Also, if one failed, which sub-step/job/workflow caused the failure.
That way I can show, for example, that out of 50 executions of workflow X, Y% failed because of reason Z, in order to improve the automation process.
After further research, the API is actually a way to get the data I want. But I also saw that within the AWX EE there are environment variables like “awx_job_id” available.
So my approach would be a callback plugin that adds the values of these variables as extra fields to the Ansible execution. In my testing it was not possible to add them to the “ansible-runner run -j” output (which should be the JSON output I receive in Elastic for my logging) with my callback. Has anyone done this within an AWX execution environment?
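One caveat: the per-event JSON that ansible-runner emits is built by runner itself, so a callback may not be able to inject fields into those records directly. A more modest sketch is a callback that emits the AWX context once as its own line, which a Logstash/Elastic pipeline could then join with the job_events stream. The variable name follows the “awx_job_id” mentioned above; whether your EE actually exposes it under that name is an assumption:

```python
import json
import os

# Names assumed from the environment variables observed in the AWX EE;
# extend this tuple with whatever else your environment actually exposes.
CONTEXT_VARS = ("awx_job_id",)

def awx_context():
    """Collect the AWX-related environment variables into one dict."""
    return {k: os.environ[k] for k in CONTEXT_VARS if k in os.environ}

try:
    from ansible.plugins.callback import CallbackBase
except ImportError:  # lets the helper be tested outside Ansible
    CallbackBase = object

class CallbackModule(CallbackBase):
    """Hypothetical callback that prints the AWX context once per play,
    as a single JSON line a log pipeline could pick up."""
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = "aggregate"
    CALLBACK_NAME = "awx_context"

    def v2_playbook_on_play_start(self, play):
        print(json.dumps(awx_context()))
```

This sidesteps modifying runner’s event stream, at the cost of doing the correlation in the log pipeline instead of in the events themselves.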