Hi there,

I have a playbook that creates virtual machines via VMware and provisions the OS: it sets the IP, registers the host with a Foreman instance, installs basic packages, and so on.

One step of this process is joining a Microsoft AD via the linux-system-roles.ad_integration role. Because the DNS record is created via the computer account, I have to delegate the role, on the first run, to the IP of the host:
```yaml
- name: "Join AD realm with delegate"
  when: inventory_hostname != adclient_remote_host
  ansible.builtin.include_role:
    name: fedora.linux_system_roles.ad_integration
    apply:
      become: true
      delegate_to: "{{ adclient_remote_host }}"
      remote_user: "{{ adclient_remote_user }}"
```
The role “ad_integration” then tries to install missing packages via the “package” module (not using the FQCN). This fails with the message:

```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
fatal: [dmfapptst.falke.central -> 172.20.141.113]: FAILED! => {"changed": false, "msg": "Could not find a module for {{hostvars['xxx.xxx.xxx.xxx']['ansible_facts']['pkg_mgr']}}."}
```
This is a known issue with “package” when used with “delegate_to”: Cannot delegate to a host defined by a variable whose value is determined using ansible_facts, for package module · Issue #82598 · ansible/ansible · GitHub
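The failure mode can be illustrated with a toy model in plain Python (this is not Ansible's actual implementation, just a sketch of what the quoted error shows): the package action picks the real module from the delegated host's `pkg_mgr` fact, but on affected versions the delegated task vars still carry the unresolved template string, which then fails the module lookup.

```python
def pick_module(task_vars, available_modules):
    """Toy stand-in for the package action's module selection:
    read the pkg_mgr fact and map it to a concrete module name."""
    name = task_vars.get("ansible_facts", {}).get("pkg_mgr", "auto")
    if name not in available_modules:
        # Mirrors the "Could not find a module for ..." error from the post.
        raise LookupError(f"Could not find a module for {name}.")
    return name


# On the affected versions, the pkg_mgr fact for the delegated host is left
# as an unresolved template string instead of e.g. "yum":
task_vars = {
    "ansible_facts": {
        "pkg_mgr": "{{ hostvars['xxx.xxx.xxx.xxx']['ansible_facts']['pkg_mgr'] }}"
    }
}

try:
    pick_module(task_vars, {"yum", "dnf", "apt"})
except LookupError as exc:
    print(exc)  # the raw template string ends up in the error message
```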
In Ansible 2.18 this behaviour is fixed, which I could confirm by temporarily updating my environment. Unfortunately, I have to support EL 7 hosts whose Python version is stuck at 3.6.8, so I need to keep using Ansible 2.16.
One solution is to keep all collections/roles local and patch all calls to “package” by hand to use yum/dnf. In my opinion this should be the last option to consider.
I had a look at the implementation of “package.py” and saw that there are no real dependencies on Python > 3.6.8, so it would be an option to backport this one action plugin to Ansible 2.16.

That’s what I did:

- created ./plugins/action in my project
- placed the patched package.py into ./plugins/action/
- set “action_plugins = ./plugins/action” in my ansible.cfg
When calling “package” from a local playbook, the new patched version gets loaded, but when it is called from a nested role, the old “ansible.builtin.package” gets executed.

Now my question: Is there a chance I can overload the default plugin globally (in my environment, without patching anything in “~/” or “site-packages”), or is there another way to get around the mentioned issue?
Ansible prioritizes the builtin module inside roles, so placing an action plugin in ./plugins/action/ does not override ansible.builtin.package when a role explicitly calls it.
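That resolution behavior can be sketched as a toy lookup in plain Python (an illustration only, not ansible-core's actual plugin loader; the function and the plugin sets are made up for the example): a fully qualified name like `ansible.builtin.package` is resolved directly in its collection, so a configured `action_plugins` override path is only consulted for bare short names.

```python
def resolve_action(action, custom_plugins, builtin_plugins):
    """Toy model of action-name resolution.

    custom_plugins:  short names found under a configured action_plugins path
    builtin_plugins: short names shipped with ansible-core
    """
    if "." in action:
        # FQCN: resolved inside the named collection, bypassing any
        # action_plugins override paths entirely.
        ns, coll, short = action.split(".", 2)
        if (ns, coll) == ("ansible", "builtin") and short in builtin_plugins:
            return f"builtin:{short}"
        return f"collection:{action}"
    # Bare short name: configured override paths win over the builtin.
    if action in custom_plugins:
        return f"custom:{action}"
    if action in builtin_plugins:
        return f"builtin:{action}"
    raise KeyError(action)


custom = {"package"}               # patched copy in ./plugins/action/
builtin = {"package", "yum", "dnf"}

print(resolve_action("package", custom, builtin))                  # -> custom:package
print(resolve_action("ansible.builtin.package", custom, builtin))  # -> builtin:package
```

This matches the observation above: a local playbook calling bare `package` picks up the patched plugin, while a role calling the FQCN does not.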
However, you can try placing the patched package.py inside the action_plugins directory within the collection directory structure:

```
~/.ansible/collections/ansible_collections/ansible/builtin/plugins/action/package.py
```

This approach works because Ansible loads action plugins from the collection path before falling back to the built-in ones.
Hi Vijayakumar,

thanks for your suggestion. Unfortunately, it didn’t work. I tried placing my custom package.py in the following locations:
- ./collections/ansible_collections/ansible/builtin/plugins/action/package.py
- ~/.ansible/collections/ansible_collections/ansible/builtin/plugins/action/package.py
(The first location was to make sure it gets distributed to the other developers via the attached git repository.)
My “package.py”, just for testing:

```python
from __future__ import annotations

from ansible.errors import AnsibleAction, AnsibleActionFail
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.facts.system.pkg_mgr import PKG_MGRS
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars

display = Display()


class ActionModule(ActionBase):

    TRANSFERS_FILES = False

    BUILTIN_PKG_MGR_MODULES = {manager['name'] for manager in PKG_MGRS}

    def run(self, tmp=None, task_vars=None):
        """ handler for package operations """
        raise AnsibleActionFail('This my exception.')
```
When I run a basic test with the playbook:
```yaml
- name: Test
  hosts: all
  gather_facts: false
  tasks:
    - name: Setfact
      ansible.builtin.set_fact:
        delegate_host: xxx.xxx.xxx.xxx

    - name: Add IP Host
      ansible.builtin.add_host:
        name: "{{ delegate_host }}"

    - name: Gather Facts
      delegate_to: "{{ delegate_host }}"
      delegate_facts: true
      gather_facts:

    - name: Install something
      become: true
      delegate_to: "{{ delegate_host }}"
      package:
        name: openssh-server
```
I don’t get the expected error, which should be caused by the raised exception.

For complete information, here is my ansible.cfg:
```ini
[defaults]
host_key_checking = False
inventory = inventory/inventory.yml
ansible_managed = Maintained by Ansible - do not edit!
ask_vault_pass = True
```
Thanks for trying the suggested approach. Let’s try some alternatives.

- Modify the role to use your own module instead of ansible.builtin.package

Instead of overriding the global plugin, modify the role to use a patched module. In the ad_integration role, replace:

```yaml
- name: Install required packages
  ansible.builtin.package:
    name: "{{ item }}"
    state: present
```

with:

```yaml
- name: Install required packages
  myorg.custom.package:
    name: "{{ item }}"
    state: present
```
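If you vendor the role locally, the substitution can be scripted rather than done by hand. A minimal sketch (the `roles/ad_integration` path and the `myorg.custom` collection name are assumptions from this thread, adjust to your layout):

```python
from pathlib import Path

# Assumed location of the locally vendored copy of the role.
ROLE_DIR = Path("roles/ad_integration")

for task_file in ROLE_DIR.glob("tasks/**/*.yml"):
    text = task_file.read_text()
    # Swap the builtin module for the (hypothetical) patched collection module.
    patched = text.replace("ansible.builtin.package:", "myorg.custom.package:")
    if patched != text:
        task_file.write_text(patched)
        print(f"patched {task_file}")
```

Re-run the script after each role update so the patch survives upgrades of the vendored copy.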
- Create a custom collection to host your patched package plugin

```
collections/ansible_collections/myorg/custom/
├── plugins/
│   ├── action/
│   │   └── package.py   # your patched version
│   └── modules/
│       └── package.py   # (optional, if needed)
├── MANIFEST.json
└── galaxy.yml
```

Then use `ansible-galaxy collection build` and install it.
If modifying the role isn’t feasible, you can use set_fact to force yum or dnf usage:

```yaml
- name: Determine package manager
  set_fact:
    package_manager: "{{ ansible_facts['pkg_mgr'] | default('yum') }}"

- name: Install required packages using explicit package manager
  command: "{{ package_manager }} install -y openssh-server"
  become: true
  delegate_to: "{{ delegate_host }}"
```
If modifying the role is impossible, try forcing the usage of your patched plugin using a custom plugin path:

```ini
[defaults]
action_plugins = ./plugins/action
```
Let me know how it goes.