I’ve created a proposal on introducing execution plugins in ansible/proposals. I’d like to use this thread as a (secondary?) place for discussion, maybe on a more philosophical level?
I’ve been toying around with the idea for quite a while, at least since `ansible-playbook` (`ansible-core 2.17.1`) fails on targets with Python 3.9. Before that, I had been thinking about how to write modules in other languages such as Rust or Go without having to deliver compiled binaries. My idea back then: have an action plugin that compiles the code and then tells ansible-core to execute the binary module.
This could in principle also be done for Python code that shouldn’t use ansible-core’s module_utils framework (it would likely need some more work, but essentially you’d have an action plugin that produces its own AnsiballZ payload and lets ansible-core treat it as a “binary module”).
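To make that idea concrete, here is a minimal, purely illustrative sketch of such an action plugin for a Go source file shipped next to the plugin. The `ActionBase` helpers used (`_make_tmp_path`, `_transfer_file`, `_transfer_data`, `_fixup_perms2`, `_low_level_execute_command`) do exist in ansible-core, but they are internals, and all names, paths, and the argument handling here are assumptions:

```python
# Hypothetical action plugin: compile a Go "module" on the controller,
# then run the resulting binary on the target like a binary module.
# All names/paths are made up for illustration; the ActionBase helpers
# used here exist in ansible-core, but this is not a supported API.
import json
import os
import subprocess
import tempfile

from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):

    TRANSFERS_FILES = True

    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        task_vars = task_vars or {}

        # 1. Cross-compile for the target's architecture
        #    (assumes 'go' is available on the controller).
        goarch = {'x86_64': 'amd64', 'aarch64': 'arm64'}.get(
            task_vars.get('ansible_architecture', 'x86_64'), 'amd64')
        src = os.path.join(os.path.dirname(__file__), '..', 'module_src', 'mymodule.go')
        fd, binary = tempfile.mkstemp(prefix='ansible-mymodule-')
        os.close(fd)
        subprocess.check_call(
            ['go', 'build', '-o', binary, src],
            env=dict(os.environ, GOOS='linux', GOARCH=goarch, CGO_ENABLED='0'))

        # 2. Ship the binary plus an args file; binary modules receive the
        #    path to a JSON args file as their first command-line argument.
        remote_tmp = self._make_tmp_path()
        remote_bin = self._connection._shell.join_path(remote_tmp, 'mymodule')
        remote_args = self._connection._shell.join_path(remote_tmp, 'args.json')
        self._transfer_file(binary, remote_bin)
        self._transfer_data(remote_args, json.dumps(self._task.args))
        self._fixup_perms2([remote_tmp, remote_bin], execute=True)

        # 3. Execute the binary and parse its JSON output as the module result.
        cmd = self._low_level_execute_command('%s %s' % (remote_bin, remote_args))
        result.update(json.loads(cmd['stdout']))
        return result
```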
But (ab)using action plugins for this doesn’t feel that great either. That’s why I was thinking that this process should be formalized, or rather that the part of ansible-core that currently does this should be made pluggable.
Moving to a more explicit ‘execution subsystem’ is something core has considered and discussed for a long time, especially with a view to making the POSIX/Windows divide a lot clearer. I’m not sure that exposing these as plugins would get us there, as these are very complex systems that must work very reliably and integrate very well with Ansible’s internals.
I would point to strategy plugins, which have not delivered the slew of options they could because the huge complexity involved makes it daunting to change the execution grouping, ordering, and continuation/error logic. Before we go down a plugin route, we must first clearly define the responsibilities of such a subsystem. Even the one you point out, handling module_utils packaging, is far from trivial and merits its own subsystem with clear input interfaces and an expected result package/environment before we can even add module execution, data/argument passing, debugging frameworks, and feedback.
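To make “clear input interfaces and expected result package/environment” concrete, a strawman of what such a scoped packaging interface could look like; none of these names exist in ansible-core, this is purely for discussion:

```python
# Strawman only: what a clearly-scoped "module packaging" subsystem
# interface might look like. Nothing here exists in ansible-core today.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExecutionPayload:
    """The 'expected result package': what later stages may rely on."""
    local_path: str                    # artifact to transfer (zipapp, script, binary, ...)
    remote_interpreter: Optional[str]  # None for self-contained binaries
    environment: dict                  # env vars the payload needs on the target


class ModulePackager(ABC):
    """The 'input interface': module source + args in, transferable payload out."""

    @abstractmethod
    def package(self, module_path: str, module_args: dict, task_vars: dict) -> ExecutionPayload:
        ...
```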
I welcome the conversation, but I would also like to set expectations: this is not something that would take any less than multiple versions of ansible-core to implement, and even then it will not ship with core as a way to support multiple versions of Python, because we cannot commit to testing those versions, which is the main reason we phase them out.
The biggest problem I currently see with distributing either the source or the binaries is cross-platform compatibility: you either need the entire (Rust/Go/…) toolchain on the controller and potentially have to build for arm64/aarch64/amd64 before you can even start, or (IMO worse) distribute all of those toolchains to the targets and compile there (faster, but it ‘pollutes’ the Ansible-managed hosts).
Potential ideas I had:

- Add a way to distribute binaries in a collection. This would introduce more build steps into `ansible-galaxy collection build`, requiring (cross-)compilation on the developer’s machine (a hypothetical layout is sketched after this list).
- Add a step that compiles the source code into binaries upon collection installation: the collection itself just packages the source code for the module (this would also require formalizing/specifying what that looks like) and shifts the choice of which targets (amd64/arm64/…) to compile for onto the user/consumer of the collection.
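For the first idea, a collection shipping prebuilt binaries could look roughly like this (the layout and names are purely hypothetical; nothing like this exists today):

```text
mystuff/mycoll/
├── galaxy.yml
├── meta/
│   └── runtime.yml
└── plugins/
    └── modules/
        ├── module1_amd64        # prebuilt linux/amd64 binary
        ├── module1_arm64        # prebuilt linux/arm64 binary
        └── module1.py           # docs stub / pure-Python fallback
```

The obvious cost is collection size and a build matrix on the developer’s side, which is exactly the trade-off between the two ideas above.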
It probably makes more sense to ship prebuilt binaries in the collection, so as not to annoy users with having to set up Go/Rust toolchains just to use a collection, at the risk of having modules that only work on some architectures.
While Ansible does not have a binary/arch facility, I’ve seen users just ship binary modules and name them or their collection appropriately, like `mystuff.amd64.module1` and `mystuff.arm64.module1`, or `mystuff.mycoll.module1_amd64` and `mystuff.mycoll.module1_arm64`, and then use the target’s architecture to compose the name of the module/collection to run against it.
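Since the action name in a task cannot be templated, the composition typically has to happen through a dynamic include. A sketch using the hypothetical names from above:

```yaml
# Sketch only: dispatch to a per-arch tasks file, since the module name
# in a task cannot be templated directly. All names are hypothetical.
- hosts: all
  gather_facts: true   # provides ansible_architecture
  vars:
    arch_map:          # map fact values onto the naming convention above
      x86_64: amd64
      aarch64: arm64
  tasks:
    - name: Run module1 for this target's architecture
      ansible.builtin.include_tasks: "module1_{{ arch_map[ansible_architecture] }}.yml"

# module1_amd64.yml would then contain e.g.:
# - name: Actually call the arch-specific module
#   mystuff.mycoll.module1_amd64:
#     some_option: value
```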
If Ansible were ever to ship binaries (in collections), I’d love to have a proper construct that avoids needing to specify the correct architecture when calling the module.
This could also lead to automatically analysing collections and tagging them as arm64/aarch64-compatible, etc. It is currently mostly a non-issue because basically all modules are Python(3)-based, but problems like “is this required Python library built with ARM/Darwin support?” already trickle into more exotic plugins.
The arch detection could be done with an action plugin that checks for a fact specifying the architecture (and, if it isn’t there, acquires the fact’s value from the remote) and picks the actual module based on the task’s name and the architecture. If you then add a meta/runtime.yml redirect from the module name (without architecture) to that action plugin, you can simply use that module name in a playbook and let the action plugin do the work of figuring out the architecture and selecting the matching module (see the sketch below).
This has been possible for some time; I’m not sure whether anyone has actually used it in practice, though. It’s also not the best solution ever, but at least it should work right now.
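A rough sketch of what such a dispatching action plugin could look like; `_execute_module` and the `ansible.builtin.setup` fallback are real ansible-core facilities, while the collection and module names are hypothetical:

```python
# plugins/action/module1_dispatch.py: sketch of the arch-dispatch idea above.
from ansible.plugins.action import ActionBase

# Illustrative mapping from the ansible_architecture fact to module suffixes.
ARCH_MAP = {'x86_64': 'amd64', 'aarch64': 'arm64'}


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        task_vars = task_vars or {}

        arch = task_vars.get('ansible_architecture')
        if arch is None:
            # Fact not gathered yet: fetch the minimal subset from the target.
            facts = self._execute_module(
                module_name='ansible.builtin.setup',
                module_args={'gather_subset': ['min']},
                task_vars=task_vars)
            arch = facts.get('ansible_facts', {}).get('ansible_architecture')

        # Dispatch to the matching per-arch binary module (names hypothetical).
        real_module = 'mystuff.mycoll.module1_%s' % ARCH_MAP.get(arch, arch)
        result.update(self._execute_module(
            module_name=real_module,
            module_args=self._task.args,
            task_vars=task_vars))
        return result
```

Combined with a `meta/runtime.yml` routing entry along these lines, `mystuff.mycoll.module1` in a playbook would resolve to the dispatcher:

```yaml
# meta/runtime.yml (sketch): route the arch-less name to the action plugin
plugin_routing:
  action:
    module1:
      redirect: mystuff.mycoll.module1_dispatch
```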
Ansible has supported binary and ‘non-Python scripting’ modules since the beginning. Though core itself has only really shipped Python modules (and later PowerShell for Windows), and some facilities like module_utils are Python-only, there are plenty of shell/Perl/Golang/Haskell/etc. modules out there.