Hello. I have two different playbooks, each of which refreshes one Oracle database from another. The two database sets are unrelated. I have coded it sequentially to begin with:
1. Refresh A from B;
2. Refresh D from C;
They run fine, but each takes hours. I need to launch them simultaneously to shorten the total run time. Since one refresh has nothing to do with the other, they can run concurrently instead of sequentially:
1. Refresh A from B + Refresh D from C;
But how do I launch two different playbooks or tasks at the same time using Ansible's own functionality? I could put them both in Linux cron with an identical start time, but I would rather do this from within Ansible.
Since the two refreshes are completely independent, the simplest approach is to run both ansible-playbook commands in parallel from a wrapper script.
The & backgrounds each command and wait blocks until both finish. Total time becomes the length of the slower one rather than both added together.
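A minimal sketch of such a wrapper, with sleep commands standing in for the real ansible-playbook invocations (the playbook names in the comments are hypothetical) so the timing effect is visible:

```shell
#!/bin/sh
# Sketch of the wrapper pattern. In the real script, replace each 'sleep'
# with the actual invocation, e.g. (hypothetical playbook names):
#   ansible-playbook refresh_a_from_b.yml
#   ansible-playbook refresh_d_from_c.yml
start=$(date +%s)
sleep 2 &     # stand-in for the A-from-B refresh
sleep 3 &     # stand-in for the D-from-C refresh
wait          # block until both background jobs have exited
end=$(date +%s)
echo "elapsed: $((end - start))s"
```

Run sequentially, these sleeps would take 5 seconds; backgrounded with & and joined with wait, the script finishes in about 3, the length of the slower job.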
If you want to keep everything in a single playbook, you can use async with poll: 0 to fire and forget each task, then poll for completion later:
- name: Start refresh A from B
  command: /path/to/refresh_a.sh
  async: 14400
  poll: 0
  register: refresh_a

- name: Start refresh D from C
  command: /path/to/refresh_d.sh
  async: 14400
  poll: 0
  register: refresh_d

- name: Wait for refresh A
  async_status:
    jid: "{{ refresh_a.ansible_job_id }}"
  register: result_a
  until: result_a.finished
  retries: 480
  delay: 30

- name: Wait for refresh D
  async_status:
    jid: "{{ refresh_d.ansible_job_id }}"
  register: result_d
  until: result_d.finished
  retries: 480
  delay: 30
Set async high enough to cover your longest expected run time (14400 = 4 hours). The poll: 0 means Ansible starts the task and moves on immediately. The async_status tasks then wait for each job to complete.
The bash wrapper approach is generally more straightforward for independent playbooks, but the async approach is useful if you need Ansible to track the results and handle errors within the same run.
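That said, a plain wrapper can still detect individual failures if you capture each background job's PID and wait on them separately, since wait with a PID argument returns that job's exit status. A sketch, where true and false stand in for a succeeding and a failing playbook run:

```shell
#!/bin/sh
# Wait on each background job individually to recover its exit status.
true &                  # stand-in for: ansible-playbook refresh_a_from_b.yml
pid_a=$!
false &                 # stand-in for: ansible-playbook refresh_d_from_c.yml
pid_d=$!
wait "$pid_a"; rc_a=$?  # exit status of the first job
wait "$pid_d"; rc_d=$?  # exit status of the second job
echo "refresh A exited $rc_a, refresh D exited $rc_d"
```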
@RianKellyIT beat me to it, but there are some caveats. If you run async tasks with poll: 0, then you need to clean up Ansible's async job cache.
Here's an example playbook I was working on. Replace the job bodies with your refresh tasks. For long-running jobs you don't need to check the status every 10 seconds as I do below for this <50 second demo, but all the relevant bits are here. Enjoy.
---
- name: Two async tasks
  hosts: localhost
  gather_facts: false
  vars:
    ten_hours: 10:00:00 # in seconds. But see https://forum.ansible.com/t/45545
                        # for why this is a terrible way to spell seconds.
  tasks:
    - name: First async job (25 seconds)
      ansible.builtin.shell: |
        echo "Task 1 starting at $(date)."
        sleep 25
        echo "Task 1 ending at $(date)."
      async: "{{ ten_hours }}" # Up to 10 hours in seconds.
      poll: 0 # Don't poll
      register: job1

    - name: Second async job (45 seconds)
      ansible.builtin.shell: |
        echo "Task 2 starting at $(date)."
        sleep 45
        echo "Task 2 ending at $(date)."
      async: "{{ ten_hours }}" # Up to 10 hours in seconds.
      poll: 0 # Don't poll
      register: job2

    - name: Wait on first job to finish
      ansible.builtin.async_status:
        jid: "{{ job1.ansible_job_id }}"
      register: job1_result
      until: job1_result is finished
      retries: "{{ ten_hours / 10 + 1 }}" # retries * delay should be >= async from job 1
      delay: 10

    - name: Wait on second job to finish
      ansible.builtin.async_status:
        jid: "{{ job2.ansible_job_id }}"
      register: job2_result
      until: job2_result is finished
      retries: "{{ ten_hours / 10 + 1 }}" # retries * delay should be >= async from job 2
      delay: 10

    - name: Cleanup async job cache for both jobs
      ansible.builtin.async_status:
        jid: "{{ item }}"
        mode: cleanup
      loop:
        - "{{ job1.ansible_job_id }}"
        - "{{ job2.ansible_job_id }}"

    - name: Show both job status
      ansible.builtin.debug:
        msg:
          Job1Result: "{{ job1_result }}"
          Job2Result: "{{ job2_result }}"
If you don't clean up the async job cache, you'll accumulate orphaned job files there. The documentation section "Run tasks concurrently: poll = 0" contains this note:
When running with poll: 0, Ansible will not automatically clean up the async job cache file. You will need to manually clean this up with the async_status module with mode: cleanup.
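For what it's worth, the async job cache lives on the target host, by default under ~/.ansible_async (relocatable via the ANSIBLE_ASYNC_DIR setting); the default path is my assumption here, so verify it against your Ansible version. You can check for leftover job files after a run:

```shell
# Check for leftover async job files on the target host. ~/.ansible_async
# is the assumed default cache location; it can be moved with the
# ANSIBLE_ASYNC_DIR configuration setting / environment variable.
state=$([ -d "${HOME}/.ansible_async" ] && echo present || echo absent)
echo "async cache directory: ${state}"
```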