Notified handler, but skipped

I’m having trouble getting a handler executed: it is notified, but then skipped. Is there any particular reason why a handler would be notified and yet skipped? From my point of view, if a handler is notified it means something changed, so it should be executed no matter what, even if only once at the end of the play. Why would it ever be skipped?

I am using Ansible 1.4.3 on Scientific Linux 6.5.

I’m using conditional roles, and the same handler exists in earlier roles that are skipped:

- hosts: all
  roles:
    - id_target
    - common
    - { role: nginx-install, when: var == '2' }
    - { role: nginx-config, when: varp == '2' }
    - { role: kibana-central, when: var == '3' }
    - { role: elasticsearch-install, when: var == '1' }
    - { role: elasticsearch-config, when: var == '1' }
    - logstash-base-shippers

The role “elasticsearch-config” has a handler to restart Elasticsearch. While running the play, “var” == 1, so the configuration change should notify the handler and the handler should run; it is notified, but it does not run.
The same handler exists in the roles “kibana-central” (which is skipped) and “elasticsearch-install”.

This is what I get:
[…]

TASK: [java-install | Install OpenJDK] ****************************************
changed: [localhost]

TASK: [elasticsearch-install | Install Elasticsearch] *************************
changed: [localhost]

TASK: [elasticsearch-install | Ensure ElasticSearch is running] ***************
ok: [localhost]

TASK: [elasticsearch-config | Copy elasticsearch.yml] *************************
changed: [localhost]

TASK: [java-install | Install OpenJDK] ****************************************
ok: [localhost]

TASK: [logstash-install | Install Logstash] ***********************************
changed: [localhost]

[…]

NOTIFIED: [elasticsearch-install | restart elasticsearch] *********************
skipping: [localhost]

NOTIFIED: [supervisor-install | restart supervisord] **************************
changed: [localhost]

NOTIFIED: [logstash-base-shippers | reload supervisord] ***********************
changed: [localhost]

PLAY RECAP ********************************************************************
localhost : ok=38 changed=21 unreachable=0 failed=0

Note:

NOTIFIED: [elasticsearch-install | restart elasticsearch] *********************
skipping: [localhost]

Thanks!

Given you have a “when” above, there’s a 99.99999% chance that’s what’s happening.

If you tag a role with “when”, all the tasks in it are applied with the same condition.

None of the tasks would fire, and neither would the handler.

If you want to define a common handler, don’t define it with a “when” condition.

It may be that you are overriding a handler with the same name in one of those roles.
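
For illustration, a minimal sketch of that collision (hypothetical file layout, not taken from this thread): since handlers are looked up by name, the definition that inherits the skipped role's condition can be the one the notification resolves to, so it is notified and then skipped.

# roles/kibana-central/handlers/main.yml
# This role is applied with "when: var == '3'", so that condition is
# effectively attached to its tasks and handlers as well.
- name: restart elasticsearch
  service: name=elasticsearch state=restarted

# roles/elasticsearch-config/handlers/main.yml
# Same handler name; if the copy from the skipped role wins, the notify
# points at a handler whose inherited 'when' is false.
- name: restart elasticsearch
  service: name=elasticsearch state=restarted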

The other possibility is there is a “creates=” or “removes=” on a handler that used a command/shell module.

Thanks Michael,

I suspected it was that (the when), but it still seems strange to me that a handler can be notified and yet skipped.
Anyway, I changed the play and I like it much more.
I just moved all the handlers to one separate file and included that file at the start of the play. Not only does it work perfectly now, I also think it is a better approach.
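
Something along these lines (a hypothetical sketch of that layout; the file names are made up for illustration):

# site.yml -- handlers defined once at play level, with no 'when' attached,
# so any role can notify them regardless of its own condition
- hosts: all
  handlers:
    - include: handlers/main.yml
  roles:
    - common
    - { role: elasticsearch-config, when: var == '1' }

# handlers/main.yml
- name: restart elasticsearch
  service: name=elasticsearch state=restarted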

So that was caused by the “when”.
I am using "when"s because I want to have only one playbook and apply roles depending on the machine type. Is there any other way to do this?

Thanks again.

“I am using "when"s because I want to have only one playbook and apply roles depending on the machine type. Is there any other way to do this?”

Yep! Use groups in that one playbook, and have more than one play:

- hosts: common
  roles:
    - common

- hosts: webservers
  roles:
    - webservers

- hosts: dbservers
  roles:
    - dbservers
This is much cleaner on output as well.
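
For this to work the inventory just needs those groups defined; a hypothetical example (host names made up for illustration):

[webservers]
web01.example.com

[dbservers]
db01.example.com

# every host that should also get the common play
[common:children]
webservers
dbservers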

I am using the groups approach in some cases, but I can’t use it in pull mode, can I? The playbook I wrote (with the "when"s) is meant to be used in pull mode.

Thanks Michael.

I’d recommend just using push for 99.547% of everyone out there.

You not only get much better logging output, but can do higher levels of orchestration, and we’ve got plenty of folks using it in very large infrastructures.

Alternatively, you can set up different repos for each of your projects and check out common role content in a known location.

Yes, different repos sound good, but it might be a bit more work to maintain the code. Anyway, I’ll give it a try.

Is there any way to deal with autoscaling in push mode? If I am using pull mode, it is only because of the autoscaling…

Thanks Michael.

“Is there any way to deal with autoscaling in push mode? If I am using pull mode, it is only because of the autoscaling…”

Tower has an excellent autoscaling callbacks feature for this.

The node can phone out and say “configure me”, and it will reach out and configure just that node.

It can be enabled and disabled on a per-job-template basis.