Hello,
I’m trying to use OS-based variables in a handler, but it doesn’t work.
Here’s my configuration:
vars/Debian.yml
iptables_service_name: iptables-persistent
iptables_config_file: /etc/iptables/rules.v4
roles/common/tasks/main.yml
- name: test value
  debug: msg="{{ iptables_service_name }} is the name of the service"
  tags:
    - test

- name: IPv4 firewall configuration
  template: src=/etc/ansible/roles/common/templates/iptables.j2 dest={{ iptables_config_file }} owner=root group=root mode=0644
  notify:
    - restart {{ iptables_service_name }}
  tags:
    - test
roles/common/handlers/main.yml
- name: restart iptables-persistent
  service: name={{ iptables_service_name }} state=restarted
The output when I launch the playbook:
GATHERING FACTS ***************************************************************
ok: [172.16.100.211]
172.16.100.211: importing /etc/ansible/vars/Debian.yml
TASK: [common | test value] ***************************************************
ok: [172.16.100.211] => {
    "item": "",
    "msg": "iptables-persistent is the name of the service"
}
TASK: [common | IPv4 firewall configuration] **********************************
changed: [172.16.100.211] => {"changed": true, "dest": "/etc/iptables/rules.v4", "gid": 0, "group": "root", "item": "", "md5sum": "db47b56591cd1636563d51eeccb4c9e2", "mode": "0644", "owner": "root", "size": 700, "src": "/root/.ansible/tmp/ansible-1390201288.3-106768121935641/source", "state": "file", "uid": 0}
ERROR: change handler (restart {{iptables_service_name}}) is not defined
In the "test value" task, iptables_service_name resolves correctly, but when I try to use it in the handler's name it doesn’t work.
Can somebody tell me what I’m doing wrong?
Regards,
So is vars/Debian loaded by include_vars?
If so, that’s the problem: handler names are resolved a bit more globally than host/inventory scope, so they can’t be templated per host. The same handler names have to be attached uniformly to the task definition; a task can’t notify one handler on one OS and a different one elsewhere.
This can be handled a bit roughly like so:
- module: foo …
  notify:
    - debian_handler
    - redhat_handler
And in the handler
- name: debian_handler
  service: name=bar state=restarted
  when: ansible_os_family == 'Debian'
etc
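Spelled out, the RedHat-family side would presumably just be the mirror image (handler and service names here are placeholders, nothing from your actual setup):
- name: redhat_handler
  service: name=baz state=restarted
  when: ansible_os_family == 'RedHat'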
A usually better way is just
- name: restart the thing
  service: name={{ service_name }} state=restarted
And just put the variable inside the handler's task definition (the service name), rather than in the handler's name.
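To make that concrete for your case, a minimal sketch of how the pieces could fit together, reusing the variable and file names from your post (an illustration, not your exact files):

vars/Debian.yml (per-OS-family variables, loaded via vars_files or include_vars)
iptables_service_name: iptables-persistent

roles/common/tasks/main.yml (notify a static handler name)
- name: IPv4 firewall configuration
  template: src=iptables.j2 dest={{ iptables_config_file }} owner=root group=root mode=0644
  notify:
    - restart iptables

roles/common/handlers/main.yml (the variable lives inside the handler, not in its name)
- name: restart iptables
  service: name={{ iptables_service_name }} state=restarted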
So is vars/Debian loaded by include_vars?
vars/Debian.yml is included in my playbook:
vars_files:
  - "vars/{{ ansible_os_family }}.yml"
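This relies on having one vars file per OS family. A RedHat-family counterpart would look something along these lines (the service name and path shown here are typical values, not the actual file):
vars/RedHat.yml (hypothetical counterpart to vars/Debian.yml)
iptables_service_name: iptables
iptables_config_file: /etc/sysconfig/iptables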
OK, I tried the better way and it’s working now :). I now have a common configuration and handlers for both of my OSes.
roles/common/tasks/main.yml:
- name: IPv4 firewall configuration
  template: src=/etc/ansible/roles/common/templates/iptables.j2 dest={{ iptables_config_file }} owner=root group=root mode=0644
  notify:
    - restart iptables
  tags:
    - test

- name: IPv6 firewall configuration
  template: src=/etc/ansible/roles/common/templates/ip6tables.j2 dest={{ ip6tables_config_file }} owner=root group=root mode=0644
  notify:
    - restart ip6tables
  tags:
    - test
roles/common/handlers/main.yml
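The contents of the handlers file aren't pasted here; judging from the notify names above, it would presumably hold two variable-driven handlers along these lines (ip6tables_service_name is a guessed second per-OS variable; a single service variable for both protocols would work just as well):
- name: restart iptables
  service: name={{ iptables_service_name }} state=restarted

- name: restart ip6tables
  service: name={{ ip6tables_service_name }} state=restarted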
Thanks for your help.