I have a Question:
I use a self-hosted GitLab for my playbooks, and the service user's key etc. gets blocked when the user hasn't logged in to the GitLab web GUI for a certain time.
So my project fails with the message that the user is blocked.
So I thought I could maybe create a workflow where I execute a playbook after the job has failed.
The playbook would need the data of the failed project so I can filter on why the project failed. I don't think I could solve the login issue itself, but I could generate a message in our chat, where I have an AWX channel, with a detailed message that someone has to log in to GitLab with user x.
The playbook is not stored in the same project on GitLab.
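As a rough sketch of that notification playbook: assuming AWX injects the usual `awx_*` extra variables into workflow jobs, the recovery node could query the AWX API for the failed run and post to a chat webhook. The AWX hostname, the token variable, the webhook URL, and the exact error string are all placeholders you would need to adapt:

```yaml
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: "Fetch the workflow job details from the AWX API (placeholder host/token)"
      uri:
        url: "https://awx.example.com/api/v2/workflow_jobs/{{ awx_workflow_job_id }}/"
        headers:
          Authorization: "Bearer {{ awx_api_token }}"
        return_content: true
      register: wf_job

    - name: "Notify the AWX chat channel that the GitLab user is blocked"
      uri:
        url: "https://chat.example.com/hooks/awx-channel"   # placeholder webhook URL
        method: POST
        body_format: json
        body:
          text: "Project sync failed: GitLab user is blocked. Someone has to log in to GitLab with user x."
      when: "'blocked' in wf_job.content"   # adapt the filter to the real error text
```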
If you use the set_stats module in your playbook, you can produce results that can be consumed downstream by another job, for example, notify users as to the success or failure of an integration run. In this example, there are two playbooks that can be combined in a workflow to exercise artifact passing:
invoke_set_stats.yml: first playbook in the workflow:
---
- hosts: localhost
  tasks:
    - name: "Artifact integration test results to the web"
      local_action: 'shell curl -F "file=@integration_results.txt" https://file.io'
      register: result

    - name: "Artifact URL of test results to Tower Workflows"
      set_stats:
        data:
          integration_results_url: "{{ (result.stdout | from_json).link }}"
use_set_stats.yml: second playbook in the workflow
---
- hosts: localhost
  tasks:
    - name: "Get test results from the web"
      uri:
        url: "{{ integration_results_url }}"
        return_content: true
      register: results

    - name: "Output test results"
      debug:
        msg: "{{ results.content }}"
I'd like to create a "self-cure" workflow with this project which executes the playbook when the project fails for a certain reason. The workflow can stop and retry if the GitLab server is just temporarily unreachable. I also have another playbook which checks a container status, but when the service user is blocked I cannot execute any playbooks from there, because that is where all my playbooks are stored.
As far as I understand (because we're exploring how to achieve something similar), this isn't straightforward using AWX Workflows. It would be with almost any other alternative out there (Airflow, Ray, various ML-oriented alternatives).
The Workflow Node doesn’t expose any kind of data. There is no return value or execution result propagated downstream in the flow. It is almost purely a Template Job/other-orchestrator trigger.
In other words, to achieve what you’d like, you’d have to implement severe coupling between the Node, Workflow, and Recovery Node, either with set_stats or with set_fact (with fact caching enabled on the Template Job). This is to facilitate propagation of job execution results, i.e. data flow between nodes.
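For the set_fact variant, a minimal sketch: marking the fact as cacheable stores it in the fact cache, so a later job run against the same host can read it back (assuming fact caching/storage is enabled on the Job Template). The variable name mirrors the set_stats example above:

```yaml
- name: "Persist the result URL in the fact cache for downstream jobs"
  set_fact:
    integration_results_url: "{{ (result.stdout | from_json).link }}"
    cacheable: true
```

Unlike set_stats, which AWX forwards as workflow artifacts, cached facts are keyed per host, so both jobs must target the same inventory host.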
Needless to say, this is without looking at the AWX API. Which is an even more expensive approach.
Alternatively (which is what we’re doing), you can contain the recovery responsibility in the same role (e.g. with an Ansible block’s rescue) and, additionally, have a Rollback Role executed as a Job Template, On Failure, by the Workflow. That rollback is also an ad-hoc Template Job that can be executed manually, if need be.
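The block/rescue pattern mentioned above can be sketched like this; the failing command is a placeholder, and `ansible_failed_task`/`ansible_failed_result` are the variables Ansible sets inside a rescue section:

```yaml
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: "Run the step with self-contained recovery"
      block:
        - name: "Step that may fail"
          command: /usr/local/bin/sync_project.sh   # placeholder for the real task
      rescue:
        - name: "Recover (or at least report) inside the same role"
          debug:
            msg: >-
              Recovering: task '{{ ansible_failed_task.name }}' failed with
              '{{ ansible_failed_result.msg | default('unknown error') }}'
      always:
        - name: "Runs regardless of success or failure"
          debug:
            msg: "Cleanup / status reporting goes here"
```

This keeps the first line of recovery out of the workflow entirely, so the workflow's On Failure path only has to handle what the role itself cannot rescue.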