We are developing an Ansible collection in which our modules are written in Go.
This brings several benefits that align directly with the goals of the collection (VMware and OpenStack bindings), but it also benefits Ansible module execution in general: for example, runtime performance is significantly improved with these small, compiled binary modules.
Last week, I proposed a PR to Ansible Core to extend the sanity checks to support Go modules: validating JSON input/output for Go-based modules, checking the Go file skeleton, ensuring the module documentation matches the Go module arguments, and so on.
The goal is to avoid having to skip all sanity checks, especially since this is a certified collection.
Iām opening this thread for several reasons.
First, Iām curious whether there is interest in, and existing usage of, Go with Ansible.
Second, Iām wondering whether we could contribute to the documentation or help write guidelines for developing and using Go-based modules. Finally, Iād like to know if there is any interest in creating a third-party testing tool specifically for Go modules.
Thank you in advance for your thoughts and feedback.
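To make the discussion concrete, here is a minimal sketch of what such a Go module can look like. It follows the documented Ansible binary-module contract (the module is invoked with one argument, the path to a JSON file of parameters, and prints a JSON result on stdout); the argument names themselves are illustrative, not taken from the actual collection.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ModuleArgs mirrors a module's documented argument spec.
// The fields are illustrative examples, not from the real collection.
type ModuleArgs struct {
	Name  string `json:"name"`
	State string `json:"state"`
}

// Response is the JSON structure Ansible expects on stdout.
type Response struct {
	Msg     string `json:"msg"`
	Changed bool   `json:"changed"`
	Failed  bool   `json:"failed"`
}

// parseArgs decodes and minimally validates the JSON arguments file content.
func parseArgs(data []byte) (ModuleArgs, error) {
	var args ModuleArgs
	if err := json.Unmarshal(data, &args); err != nil {
		return args, fmt.Errorf("invalid module input: %w", err)
	}
	if args.Name == "" {
		return args, fmt.Errorf("missing required argument: name")
	}
	return args, nil
}

// exitJSON prints the result as JSON and exits with the matching status.
func exitJSON(r Response) {
	out, _ := json.Marshal(r)
	fmt.Println(string(out))
	if r.Failed {
		os.Exit(1)
	}
	os.Exit(0)
}

func main() {
	// Ansible invokes a binary module with exactly one argument:
	// the path to a JSON file containing the module parameters.
	if len(os.Args) != 2 {
		exitJSON(Response{Msg: "no argument file provided", Failed: true})
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		exitJSON(Response{Msg: err.Error(), Failed: true})
	}
	args, err := parseArgs(data)
	if err != nil {
		exitJSON(Response{Msg: err.Error(), Failed: true})
	}
	exitJSON(Response{Msg: "name=" + args.Name + " state=" + args.State, Changed: false})
}
```

The sanity checks in the PR would target exactly this kind of skeleton: valid JSON in and out, and argument structs that match the module documentation.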
I think your idea is interesting, but the effort and impact must be carefully considered and evaluated in advance. Broadly, I think that considering further integration scenarios in Ansible, in this case the Go programming language, is worth looking into, even if only at a high level of abstraction. If I may, I would like to redirect your attention to another perspective.
One particular difficulty I see with this approach is the use of compiled binary modules to improve performance. I am not a Go expert, but I assume these binaries depend on the architecture they were compiled for, e.g. x86, ARM, PPC. This raises an important question for me: how did you solve the problem of using the right binary modules in a heterogeneous system landscape with different architectures? Perhaps you could briefly describe how you solved this challenge.
That is a fundamental question for me. One valuable advantage of Ansible, in my view, is the use of Python source code and, with it, independence from the target architecture. Binary modules can speed up automation processes from a performance perspective, but they can also make them significantly more complicated to use.
I am aware that this point of view goes in a completely different direction than your suggestion. In my opinion there should be a coherent approach here to convince users of the added value.
Using the right binary should be relatively easy: use an action plugin that determines the targetās architecture (calling ansible.builtin.setup to figure it out if facts havenāt been gathered already) and picks the matching binary module file.
I built one as a proof of concept a few years ago. For speed it wasnāt using setup, though the logic from something like package could easily be reused to run setup only when needed.
Hey,
thank you for your reply.
Yes, this collection targets execution on the x86_64 architecture.
For that collection in particular there is no real need to check the arch and build for different systems.
But indeed, for other collections that can be useful.
Also, we provide an Ansible Execution Environment built with all the requirements for this collection, which allows us to ship a self-contained environment.
I understand that binary modules are a different approach from Ansibleās classic Python one, but this has its benefits for specific needs.
The three biggest problems I see with binary modules are:

1. The arch selection
2. Ensuring the binary is compiled against an API/ABI compatible with the target
3. The size of the binary
The arch selection can certainly be done, as sivel showed, through an action plugin that selects the arch via a fact, a one-time command, or a variable set for the host.
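In practice that selection logic would live in a Python action plugin, but the core of it is just a lookup from the `ansible_architecture` fact to a cross-compiled binary name. As a hedged sketch (the fact values and file-naming scheme here are assumptions, not from this thread):

```go
package main

import "fmt"

// archToBinary maps values of the ansible_architecture fact to the
// GOOS/GOARCH suffix used when cross-compiling the module binaries.
// Both the fact values and the naming scheme are illustrative assumptions.
var archToBinary = map[string]string{
	"x86_64":  "linux_amd64",
	"aarch64": "linux_arm64",
	"ppc64le": "linux_ppc64le",
}

// selectBinary returns the module binary file name for a target
// architecture, or an error if no build exists for that arch.
func selectBinary(module, arch string) (string, error) {
	suffix, ok := archToBinary[arch]
	if !ok {
		return "", fmt.Errorf("no %s build for architecture %q", module, arch)
	}
	return module + "_" + suffix, nil
}

func main() {
	name, err := selectBinary("vmware_guest_info", "aarch64")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // vmware_guest_info_linux_arm64
}
```

The error branch matters: an unknown architecture should fail the task with a clear message rather than copy a binary that cannot run on the target.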
The API/ABI problem can come up if the binary is compiled in a way that is not compatible with the host it will be executed on: for example, the binary being built against glibc 1.23 while the host only has glibc 1.22. This is more of a build problem, but itās a hard one to solve once you get into a mix of different distros and versions.
The size of a binary becomes problematic when you execute these modules against a remote target and not just localhost. Each task will copy the binary, which can be megabytes in size. That is a lot of data to transfer for each module invocation and will slow down playbooks.
None of this is insurmountable but these are pretty big problems to overcome for a heterogeneous set of environments.
For Golang (and Rust and similar languages) API/ABI wonāt be a problem, since programs are statically linked anyway. But in both cases, the size of the binary could be a problem, since they tend to be quite bigā¦
I always thought it was still dynamically linked to libc (whether glibc/musl/etc.) somehow, but I guess I havenāt looked too deeply. I do recall reading that Golang does its syscalls itself, so it makes sense they want to control the whole stack up to that point.
Hmm, if they are, I really wonder why the binaries are that extremely huge. But indeed, I just checked a Golang and a Rust binary, and they are both dynamically linked - against a very small set of libraries (smaller for Golang than for Rust), but libc is there.
Itās the same for Rust, I think. I was hoping (before learning about this the hard way) that thereās some basic dead code removal (tree shaking) that would reduce very simple programs (like Hello World programs) to a small size, but that apparently isnāt the case.
In any case, back on topic: I think having a Go module argspec linter makes a lot more sense if there is a Go module framework that the modules use (and the linter is specific to that framework). Some code that handles argument spec validation and coercion similar to the Python AnsibleModule (and the PowerShell/C# equivalents), and provides common helper functions.
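A spec-driven validator is the heart of such a framework. As a hedged sketch of what the Go counterpart of the Python `AnsibleModule` argument handling might look like (types and field names here are invented for illustration; a real framework would also cover type coercion, aliases, defaults, etc.):

```go
package main

import "fmt"

// ArgSpec describes one module option, loosely modelled on the Python
// AnsibleModule argument_spec. This is a hypothetical, minimal shape.
type ArgSpec struct {
	Type     string // "str", "int", "bool" (unused in this sketch)
	Required bool
	Choices  []string
}

// ValidateArgs checks JSON-decoded parameters against a spec:
// unknown keys, missing required options, and invalid choice values.
func ValidateArgs(spec map[string]ArgSpec, params map[string]interface{}) error {
	// Reject parameters the module does not declare.
	for key := range params {
		if _, ok := spec[key]; !ok {
			return fmt.Errorf("unsupported parameter: %s", key)
		}
	}
	for name, s := range spec {
		val, present := params[name]
		if !present {
			if s.Required {
				return fmt.Errorf("missing required parameter: %s", name)
			}
			continue
		}
		// Enforce enumerated choices when the spec declares them.
		if len(s.Choices) > 0 {
			str, _ := val.(string)
			ok := false
			for _, c := range s.Choices {
				if str == c {
					ok = true
					break
				}
			}
			if !ok {
				return fmt.Errorf("value of %s must be one of %v, got %v",
					name, s.Choices, val)
			}
		}
	}
	return nil
}

func main() {
	spec := map[string]ArgSpec{
		"name":  {Type: "str", Required: true},
		"state": {Type: "str", Choices: []string{"present", "absent"}},
	}
	err := ValidateArgs(spec, map[string]interface{}{
		"name": "vm01", "state": "present",
	})
	fmt.Println(err) // <nil>
}
```

If modules all went through a shared function like this, a linter could statically compare each module's spec against its documentation, which is exactly the coupling that makes the Python argspec sanity checks possible.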
Yep, I see something akin to the various antsibull tools but specific to the language of choice: common utilities for that language to run sanity checks, build the code in a common way, etc. Iām not saying antsibull or the community would work on this, but thatās where I would see tooling for languages outside of Python and PowerShell being done.
For the size of the binaries, it depends on how the collection is designed.
If itās a constellation of small, single-purpose modules, the size is not so big (and you can strip the binaries, indeed).
The problem is different if you design it around one big entry point.