This is brought up in the async issue on github but I’ll bring it up
for discussion here, too.
Here is the use case:
ansible '*' -n yum -a "update"
now - I would like to do this sort of job async, obviously, so if
something nukes the network connection the yum process doesn’t die
half-way through.
Sure.
So here is what I would like a perfect world to look like:
I run the job async and it returns to me a unique job identifier - which
is ultimately the name of the directory that is storing the
stderr, stdout, and status from the module that is running.
so I can connect later and run:
ansible '*' -n jobs -a "myjobid"
and get back stdout, stderr and status (if complete)
or get reconnected waiting for the job to complete.
Any zany thoughts on this?
Ok, thinking out loud here.
Yeah, I agree with your example above: we want all modules to support async, not just one of them, and we don't want to code it into each.
So probably we need to deploy a wrapper module also, behind the scenes, when the setup module is run.
The wrapper module should support "status=N"
and launch "name=N args=args"
and we teach runner some syntactic sugar so that people don’t know the wrapper module even exists. For instance, the copy
and template modules (which will NEVER need async) do special things to pretend they are pure modules when really there
are some extra steps involved.
So that really the SSH command we exec when --async is used, instead of the module itself, is this:
async_wrapper --timeout=N --module=N
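A minimal sketch of what such a wrapper could do, in Python. Everything here (JOB_ROOT, launch(), the file names) is invented for illustration - this is not real ansible code, just one way the fork-and-record idea could work:

```python
# Illustrative only: JOB_ROOT and launch() are hypothetical names,
# not anything that exists in ansible.
import json
import os
import subprocess
import uuid

JOB_ROOT = "/tmp/ansible_async"  # assumed location for per-job state


def launch(module_cmd, timeout):
    """Fork the module off, detach it from the SSH session, and record
    stdout/stderr/status in a per-job directory. Returns the job id."""
    jid = str(uuid.uuid4())
    job_dir = os.path.join(JOB_ROOT, jid)
    os.makedirs(job_dir)
    pid = os.fork()
    if pid != 0:
        return jid  # parent: hand the job id back immediately
    # child: new session, so a dropped connection can't kill the module
    os.setsid()
    with open(os.path.join(job_dir, "stdout"), "wb") as out, \
         open(os.path.join(job_dir, "stderr"), "wb") as err:
        try:
            rc = subprocess.call(module_cmd, shell=True,
                                 stdout=out, stderr=err, timeout=timeout)
        except subprocess.TimeoutExpired:
            rc = -1
    with open(os.path.join(job_dir, "status"), "w") as f:
        json.dump({"rc": rc, "finished": 1}, f)
    os._exit(0)
```

The parent returns the job id to the controller right away; the detached child keeps running even if the SSH connection goes away, which is exactly the yum-update-surviving-a-dead-network case.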
Believe it or not, I looked halfway at implementing this with screen, but
then decided that might be too error/weirdness-prone.
What if the module just returns the pid of the fork and wrapper status just checks on the pid, then saves the output in /blah/whatever//(stdout, stderr) ?
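To make the pid idea concrete, here is a rough sketch of the status side; job_status() and the directory layout are assumptions for illustration, not real ansible code:

```python
# Illustrative only: job_status() and the job directory layout are
# assumptions, not real ansible code.
import errno
import os


def job_status(job_dir, pid):
    """If pid is still alive, report in-progress; otherwise return the
    stdout/stderr the wrapper saved in job_dir."""
    try:
        os.kill(pid, 0)  # signal 0 checks existence, delivers nothing
        running = True
    except OSError as e:
        # ESRCH: no such process; anything else (e.g. EPERM) means it exists
        running = (e.errno != errno.ESRCH)
    if running:
        return {"started": 1, "finished": 0}
    result = {"finished": 1}
    for name in ("stdout", "stderr"):
        path = os.path.join(job_dir, name)
        result[name] = open(path).read() if os.path.exists(path) else ""
    return result
```

One caveat with a bare pid check: pids get reused, so a real implementation would probably want to sanity-check that the pid still belongs to the wrapper (or just trust the status file once it appears).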
This seems to imply the ansible (non playbook) syntax would be:
ansible --async -n foo -a args
and to check status
ansible -n async_status
But hmm, ansible is about making things happen in parallel… so this implies that the invocation of async should (on the server side, automagically) pick a job UUID for you, so it’s the same on all hosts.
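For instance, the control side could mint one UUID up front and reuse it in every per-host command, so the same id is valid everywhere. The host names and flags below are invented for illustration:

```python
import uuid

# One job id, minted once on the control side, shared by the whole batch.
# Hosts and command-line flags here are hypothetical.
jid = str(uuid.uuid4())
hosts = ["web1", "web2", "db1"]
commands = dict(
    (host, "async_wrapper --jid=%s --timeout=600 --module=yum" % jid)
    for host in hosts
)
```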
Maybe something simpler than the above, but I can see it working and not being too complicated.