I’m starting to run playbooks automatically via a push server, basically emulating what you get with Puppet. As part of that, I’d like to send the output of my cron-driven playbooks into my ELK stack, which is hard to do when the default output is so unreadable.
log_plays was designed to drop the JSON to syslog-ng, which would then push it to Elasticsearch without needing Logstash; it’s probably easier to just set up syslog-ng to do the same.
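For the syslog-ng side, newer versions (3.21+) ship an `elasticsearch-http()` destination that can bulk-post straight to a cluster. A rough sketch, purely illustrative — the source name, URL, and index are my assumptions, not anything from log_plays:

```
# Sketch only: assumes syslog-ng 3.21+ with elasticsearch-http() available.
# s_src, the URL, and the index name are placeholders for your setup.
destination d_elastic {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("ansible-logs")
        type("")
    );
};
log { source(s_src); destination(d_elastic); };
```

Older syslog-ng builds would need the Java-based elasticsearch2() destination or a plain network() destination pointed at Logstash instead.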
The default log_plays doesn’t actually output all the information I need.
I have lots of stuff going to syslog and then into ELK already, but in this case I figured I’d just let logstash-forwarder watch the Ansible log file, then format the output so that Logstash doesn’t have to filter it at all.
So far I’ve figured out how to get valid JSON out, one object per line. But I’m stuck on how to get the task name, the role name, and the command-line invocation.
Are there global vars I can reference from the plugin? Where could I find a list of them?
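As far as I know there isn’t a documented set of globals exposed to callback plugins; in the v2 callback API the context arrives through the hook arguments instead: `v2_playbook_on_play_start(play)`, `v2_playbook_on_task_start(task, is_conditional)` (where `task.get_name()` and `task._role` give you the task and role), and `v2_runner_on_ok(result)`. Here’s a sketch of just the per-line JSON formatting, with the plugin plumbing left out — `format_event` and the sample values are mine, not Ansible’s:

```python
import json
import time

def format_event(play, role, task, host, result):
    """Flatten one task result into a single JSON line for logstash-forwarder.

    In a real callback plugin (a subclass of
    ansible.plugins.callback.CallbackBase) you would capture play/role/task
    in v2_playbook_on_play_start / v2_playbook_on_task_start and the
    host/result in v2_runner_on_ok, then write this line to the log file.
    """
    event = {
        "@timestamp": time.strftime("%Y-%m-%dT%H:%M:%S%z"),
        "play": play,
        "role": role,
        "task": task,
        "host": host,
        # Keep the module result as a nested object, not a string,
        # so Elasticsearch can index its sub-fields.
        "ansible_result": result,
    }
    return json.dumps(event, sort_keys=True)

# Hypothetical example of one line in the watched log file:
line = format_event("site.yml", "webserver", "restart apache2", "web01",
                    {"changed": True, "rc": 0})
```

Because each line is already valid JSON, the Logstash side only needs a `json` codec or filter, no grok at all.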
Questions: the status of that “restart apache 2” task should be “CHANGED”, since it actually did change during the play run, but it seems the value logged for changed tasks is always “OK”. Is there a way to change that?
This post is quite old, but can you share your experience with me, because I’m trying to build something like you have done.
I’m trying to build an environment that is able to track security policies applied at the server level; then I could create pretty reports from Elastic/Kibana and use it for generating inventory items.
Using the given callback I’m able to log events from Ansible to Logstash/Elasticsearch, but I can’t search the ansible_result field because it is represented as a string, for instance:
"{"changed": "false", "msg": "some message"}"
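One way around that is to decode the string back into an object before it reaches Elasticsearch. On the Logstash side the `json` filter does this (`filter { json { source => "ansible_result" } }`); the equivalent fix in Python, assuming the event is a dict carrying the stringified field (`expand_result` is my name for it, not part of any library):

```python
import json

def expand_result(event):
    """If ansible_result arrived as a JSON string, replace it with the
    parsed object so its sub-fields become searchable in Elasticsearch."""
    raw = event.get("ansible_result")
    if isinstance(raw, str):
        try:
            event["ansible_result"] = json.loads(raw)
        except ValueError:
            pass  # leave malformed strings untouched
    return event

# The stringified payload from the post, parsed back into an object:
event = expand_result(
    {"ansible_result": '{"changed": "false", "msg": "some message"}'}
)
```

Note that `"changed": "false"` is itself a string there, not a boolean, so you may also want to normalize that before querying on it.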