Notify a task

Yeah I realize it's a *bit* redundant to have both

service name=foo state=restarted (as a handler)

And also

service name=foo state=started enabled=yes (as a task)

However, unlike Puppet, we do not simply have tasks that are notified
without parameters. In our system, a notify does not trigger one fixed
action every time (like a restart); it can do whatever you want -- in
other words, *all* of our modules support notification.
I can just as easily notify a task to *stop* a service, or to transfer an
additional config file -- though those cases won't come up much.
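
For example, here is a rough sketch of notifying a handler that *stops* a service rather than restarting it (the service name 'foo' and the file paths are just placeholders):

tasks:
  - name: push a maintenance config
    action: copy src=files/maintenance.conf dest=/etc/foo/maintenance.conf
    notify:
      - stop foo

handlers:
  - name: stop foo
    action: service name=foo state=stopped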

This is the difference that makes things work the way they do now, and
that's not going to change. Basically we're not missing any
capabilities; the syntax is just a bit different.

Really, one of the coolest things we can do is share handlers between
all plays and playbooks:

handlers:
   - include: tasks/handlers.yml

And only define them once. So, while, yes, you must define "restart
sshd" as a handler separately from the task that ensures it is
enabled/running, it's a more flexible system, and once defined, you
can reuse it between files.
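
For instance, the included file (the tasks/handlers.yml path above is just illustrative) could hold nothing but named handlers, and any play that includes it can then notify them by name:

- name: restart sshd
  action: service name=sshd state=restarted

- name: restart apache
  action: service name=httpd state=restarted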

So basically this isn't going to change.

To clarify this part:

"Once the firewall is started (task), ansible gets disconnected before
getting to the ssh-restart handler. Outcome: ssh still listening on
the old port, while the firewall permits only the new port - and I'm
locked out :("

This will be solved with https://github.com/ansible/ansible/issues/784 ...

Fair enough. I understand your argument for separating handlers from tasks.

If I understood correctly, I don't think that issue will resolve what happens here.
The issue addresses the case where ansible decides to stop executing the remaining handlers.
What is happening in my case is that ansible gets disconnected from the remote host by the firewall, and it's beyond ansible's control to stay connected. So even if ansible tries to continue, it will not be able to. Ansible is not stopping purposefully; it is interrupted by loss of connectivity.

So yeah, can't help you there, and this wouldn't help you either.

The point I'm trying to make is that the cause of this issue is that ansible does not give enough control over the order of executing tasks and handlers.

It forces all tasks to be executed before any handlers.

The relevant parts in this example are the ssh task (push the config file), a related handler (restart ssh), and a firewall task (start the firewall).

The firewall task cuts ansible off, so it needs to run last. But ansible forces the middle action (the restart-ssh handler) to run last instead.

I am just looking for a clean solution for a real setup. It's not a hypothetical situation, and there could well be other examples where a handler needs to be executed before a task.

The best solution I could think of is to split the playbook into two playbooks: the first includes the ssh task and handler, while the second includes the firewall actions. But this effectively moves the responsibility for fixing the issue onto the user - doing it manually instead of having the tool take care of it - which kind of defeats the purpose, a little bit!
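
As a rough illustration of that split (hosts, paths and service names are made up, the shorewall status quirk mentioned further down is glossed over, and the two plays could just as well live in separate playbook files): since handlers are flushed at the end of each play, the sshd restart happens before the firewall play begins.

# play 1: reconfigure ssh and restart it via the handler
- hosts: webservers
  tasks:
    - name: push sshd config with the new port
      action: copy src=files/sshd_config dest=/etc/ssh/sshd_config
      notify:
        - restart sshd
  handlers:
    - name: restart sshd
      action: service name=sshd state=restarted

# play 2: only now bring up the firewall
- hosts: webservers
  tasks:
    - name: ensure firewall is running
      action: service name=shorewall state=started enabled=yes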

So your original request is a bit different from what I interpreted it
to be. You don't want to notify tasks; you want to be able, in some
cases, to fire handlers immediately instead of at the end of the run.

This feels like it calls for a task attribute, something like
"notify_policy: immediate" on the task object.
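
A hypothetical sketch of what that might look like (to be clear, no 'notify_policy' attribute exists; this is only to illustrate the idea, and the paths are placeholders):

- name: push sshd config with the new port
  action: copy src=files/sshd_config dest=/etc/ssh/sshd_config
  notify:
    - restart sshd
  notify_policy: immediate   # hypothetical: flush this notification right away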

I think the easiest thing for you to do is to skip the notify and just
have it as a regular task that always bounces the thing in question.
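
In this case that would mean an ordering something like the following (again a sketch with made-up paths; sshd gets restarted unconditionally, which is the price of not using a handler):

- name: push sshd config with the new port
  action: copy src=files/sshd_config dest=/etc/ssh/sshd_config

- name: always restart sshd, whether or not the config changed
  action: service name=sshd state=restarted

- name: ensure firewall is running
  action: service name=shorewall state=started enabled=yes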

OK. Here is a less manual, but a little uglier, solution:

The last task would look like this:

- name: Ensure firewall is running
  action: shell shorewall status || (echo "service shorewall start" | at now + 1 minutes)

(shorewall is an iptables frontend. The only way to find out whether it's running is by running 'shorewall status')

This would give the handler some time to execute (at the end of the play) before the firewall actually comes up.

Can anyone think of a better way, or a current or prospective ansible feature that could make this cleaner?

I've thought about this case a fair bit: the case where you need to do something that will temporarily make ansible access impossible, so it cannot complete/monitor the rest of the playbook.

Something Michael and I have discussed before but never got anywhere with was to be able to do something like this:

Connect to the target host with a playbook. Feed it the playbook, parse out all the possible modules + files + variable expansions the target host will need (in short, fully populate the playbook for that host).

Stuff all of that info over the connection. Then, using async, tell it to run it and record the results.

It's sorta like ansible-pull - but w/o the need to expose your entire set of configs/playbooks/variables in a semi-public git repo.

I guess the other option would be to stuff a shallow git clone over the wire and run ansible-pull from there.
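
A rough sketch of that option (all paths, hostnames and the playbook name are made up, and this only approximates what ansible-pull does by running the playbook locally on the target):

# on the control machine: make a shallow clone and push it to the target
git clone --depth 1 /path/to/configs /tmp/configs-shallow
scp -r /tmp/configs-shallow targethost:/tmp/configs

# on the target host: run the playbook against localhost only
echo localhost > /tmp/local_inventory
ansible-playbook -i /tmp/local_inventory -c local /tmp/configs/site.yml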

It's not super-lightweight, but maybe there is a way to make it more cleverly constructed.

maybe something like:

- name: ansible pull-and-run
  action: pull src=/path/to/some/configs playbook=/path/to/pb/to/send
  async: true

Of course none of that exists now - but the pull module would need to suck up a lot of local files, make them into a git repo or just a tarball, and stuff them across the wire using copy. Then, once on the other end, unarchive them and run the playbook? I guess you'd need to include any/all modules, too.

Like I said - it gets tricky to compile everything you need to bootstrap yourself.

-sv

IDK.

Generally, one of the things you want to avoid doing through config management is anything that takes away your ability to keep configuring the machine.

At some point we have to say that Ansible's choices to make some things easy and simple (and that includes the implementation) mean some other things are hard and require workarounds, rather than modifying Ansible to support them.

Identifying exactly what things to transfer and moving only those is technically possible, but it's also a large amount of work.

One way to ensure that things which need to be atomic absolutely get done is just to write a module for them.

I'm theorizing that any sort of network config that can sever our ability to talk to the box should probably be done within a single module call, not across multiple module calls -- either that, or you should use ansible-pull.

I am not really interested in the "bundle up all files and run playbooks end to end" thing, because Ansible was designed for multi-node deployment, where facts about other systems are useful and things can be done in well-ordered steps across those servers. That pretty much means pushing to all the nodes at the same time.

In the above case, having a module to configure the firewall probably wouldn’t be terrible.
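
As a sketch of that direction (the module name and its parameters here are entirely hypothetical; nothing like this ships with Ansible), the point is to collapse the dangerous steps into a single module call, so the connection can drop inside one operation rather than between tasks:

# one hypothetical task instead of the copy + restart-ssh + start-firewall sequence
- name: reconfigure sshd port and firewall atomically
  action: sshd_firewall ssh_port=2222 firewall=shorewall state=enabled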

I agree that it is potentially hurky. Then again - the idea of having a module for ansible which does the equivalent of an ansible-pull but with a pushed 'tree' is somewhat appealing.

it would definitely need its own module - but there's no obvious need for it to be in ansible upstream - considering what it would be doing.

I'll have to think more about the pain implications.

-sv

I would find that proposed 'notify_policy' feature useful too, in a couple
of situations, and I suspect that there are many cases in which someone
would need to run a handler right after the notifying task and not at the end.

I find the current implementation of handlers restrictive in that it
does not allow you to fully control the order of the handlers inside the
whole play. Even a 'notify_policy' feature, although it would allow you
to run the handler immediately after the notifying task, would not
allow you to run the handler after/before a _specific_ task. To handle
that case too, instead of a 'notify_policy' feature we could give
tasks a 'handler' attribute which, when set to True, would make the task
behave like a handler: run only if it has been notified by another task.
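
A sketch of how that might read (the 'handler' attribute is hypothetical, and the tasks are the ssh/firewall example from earlier in the thread):

tasks:
  - name: push sshd config with the new port
    action: copy src=files/sshd_config dest=/etc/ssh/sshd_config
    notify:
      - restart sshd

  # hypothetical: behaves like a handler (runs only if notified), but at this
  # exact position in the task list instead of at the end of the play
  - name: restart sshd
    action: service name=sshd state=restarted
    handler: True

  - name: ensure firewall is running
    action: service name=shorewall state=started enabled=yes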