Ansible Golang Modules and sanity checks

Hi folks,

We are developing an Ansible collection in which our modules are written in Go.

This brings several benefits that align directly with the goals of the collection (VMware and OpenStack bindings), but it also benefits Ansible module execution in general. For example, runtime performance is significantly improved with these small, compiled binary modules.

Last week, I proposed a PR to Ansible Core to extend the sanity checks to support Go modules: validating JSON input/output for Go-based modules, checking the Go file skeleton, ensuring the module documentation matches the Go module arguments, and so on.
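
To make that concrete, here is a minimal sketch of the Go skeleton such checks would target, following the documented binary-module protocol (Ansible passes the path to a JSON arguments file as the first argument, and the module prints a JSON result on stdout). The module's single `name` argument is purely illustrative, not taken from our collection:

```go
// hello.go - minimal Ansible binary-module skeleton (illustrative only).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ModuleArgs mirrors the documented module options; a sanity check
// could compare these fields against the module's documentation.
type ModuleArgs struct {
	Name string `json:"name"`
}

// Response is the JSON structure Ansible expects on stdout.
type Response struct {
	Msg     string `json:"msg"`
	Changed bool   `json:"changed"`
	Failed  bool   `json:"failed"`
}

// exitJSON prints the result as JSON and exits with a matching code.
func exitJSON(r Response) {
	out, err := json.Marshal(r)
	if err != nil {
		fmt.Println(`{"msg": "failed to marshal response", "failed": true}`)
		os.Exit(1)
	}
	fmt.Println(string(out))
	if r.Failed {
		os.Exit(1)
	}
	os.Exit(0)
}

func main() {
	// Ansible invokes binary modules with the path to a JSON args file.
	if len(os.Args) != 2 {
		exitJSON(Response{Msg: "no argument file provided", Failed: true})
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		exitJSON(Response{Msg: "could not read argument file", Failed: true})
	}
	var args ModuleArgs
	if err := json.Unmarshal(data, &args); err != nil {
		exitJSON(Response{Msg: "could not parse argument JSON", Failed: true})
	}
	exitJSON(Response{Msg: "hello " + args.Name, Changed: false})
}
```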

The goal is to avoid having to skip all sanity checks, especially since this is a certified collection.

I’m opening this thread for several reasons.
First, I’m curious whether there is interest in, and existing usage of, Go with Ansible.
Second, I’m wondering whether we could contribute to the documentation or help write guidelines for developing and using Go-based modules. Finally, I’d like to know if there is any interest in creating a third-party testing tool specifically for Go modules.

Thank you in advance for your thoughts and feedback.

It shouldn’t be certified if it contains Go modules, as noted in https://connect.redhat.com/sites/default/files/2025-06/Ansible-Certification-Workflow-Guide202506.pdf:

The Collection must not contain any binary files. Only plugins written in Python or Powershell will be certified.

Hello Mathieu [@matbu], :waving_hand:

I think your idea is interesting, but the effort and impact must be carefully considered and evaluated in advance. Basically, I think that considering further integration scenarios in Ansible, in this case the Go programming language, is worth looking into, even if we only look at it from a high level of abstraction. If I may, I would like to redirect your attention to another perspective.

One particular difficulty I see with this approach is the use of compiled binary modules to improve performance. I am not a Go expert, but I assume these depend on the architecture they were compiled for, e.g. x86, arm, or ppc. This raises an important question for me: how did you solve the problem of using the right binary modules in a heterogeneous system landscape with different architectures? Perhaps you could briefly describe how you approached this challenge.

That is a fundamental question for me. One valuable advantage of Ansible, in my view, is the use of Python source code and, with it, independence from the target architecture. Binary modules can speed up automation processes from a performance perspective, but they can also make them significantly more complicated to use.

I am aware that this point of view goes in a completely different direction from your suggestion. In my opinion, there should be a coherent approach here to convince users of the added value.

Thanks and best regards
Stefan

Using the right binary should be relatively easy: use an action plugin that determines the target’s architecture (and calls ansible.builtin.setup to figure it out if it hasn’t already), and picks the matching binary module file.

In os_migrate.vmware_migration_kit, which seems to be (one of?) the collections in question, there is no action plugin though (vmware-migration-kit/plugins at main · os-migrate/vmware-migration-kit · GitHub), so I guess the collection is specific to one architecture?

I guess the rules are outdated then :slight_smile: Thanks for linking that document, I always wanted to look at it but didn’t know where to find it.

I built one as a proof of concept a few years ago. For speed it wasn’t using setup, though the logic from something like package could easily be used to only run setup when needed.

Hey,
thank you for your reply.
Yes, this collection targets execution on the x86_64 architecture.
For that collection in particular, there is no real need to check the arch or build for different systems.

But indeed, for other collections that could be useful.

Also, we are providing an Ansible Execution Environment built with all the requirements for this collection, which allows us to offer a self-contained environment.

I understand that binary modules are a different approach from the classic Python one in Ansible, but this has its benefits for specific needs.

The three biggest problems I see with binary modules are:

  • The arch selection
  • Ensuring the binary is compiled against an API/ABI compatible with the target
  • The size of the binary

The arch selection can certainly be done, as sivel showed, through an action plugin that selects the arch via a fact, a one-time command, or some variable set for the host.

The API/ABI problem can come up if the binary is compiled in a way that is not compatible with the host it will be executed on. For example, the binary might require glibc 1.23 based on what it was built with, while the host only has glibc 1.22. This is more of a build problem, but it’s a hard one to solve once you start getting into a mix of different distros and versions.

The size of a binary is problematic when you start executing these modules against a remote target and not just localhost. Each task will copy the binary data, which can be megabytes in size. That is a lot of data to transfer for each module invocation and will slow down playbooks.

None of this is insurmountable, but these are pretty big problems to overcome for a heterogeneous set of environments.

For Golang (and Rust and similar languages) API/ABI won’t be a problem, since programs in these languages are statically linked anyway :wink: But in both cases, the size of the binary could be a problem, since they tend to be quite big…

I always thought it was still dynamically linked to libc (whether glibc/musl/etc.) somehow, but I guess I haven’t looked too deeply. I do recall reading that Go does the syscalls itself, so it makes sense they want to control the whole stack up to that point.

Hmm, if they are, I really wonder why the binaries are so extremely huge :slight_smile: But indeed, I just checked a Golang and a Rust binary, and they are both dynamically linked - against a very small set of libraries (smaller for Golang than for Rust), but libc is there.

I am not sure about Rust, but for Go, the Go runtime is included in each and every binary. That is why a hello world app in Go is about 2 MB.

It’s the same for Rust, I think. I was hoping (before learning about this the hard way) that there’s some basic dead code removal (tree shaking) that would reduce very simple programs (like Hello World programs) to a small size, but that apparently isn’t the case.

In any case, back on topic: I think having a Go module argspec linter makes a lot more sense if there is a Go module framework that the modules use (and the linter is specific to that framework). Some code that handles argument spec validation and coercion similar to the Python AnsibleModule (and the PowerShell/C# equivalents), and provides common helper functions.
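
To sketch what I mean (all names here are hypothetical, not an existing library): the shared framework would own the args-file parsing and spec validation, so each module shrinks to a spec plus its business logic, and the linter could be written against that single entry point:

```go
// Hypothetical framework sketch; Option and ParseArgs are invented
// names, loosely mirroring the Python AnsibleModule argument_spec.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Option describes one module parameter.
type Option struct {
	Type     string // "str", "int", "bool", ...
	Required bool
	Default  interface{}
}

// ParseArgs reads the JSON args file Ansible passes to binary modules
// and validates it against the given spec (required/default handling
// only; type coercion is left out of this sketch).
func ParseArgs(spec map[string]Option) (map[string]interface{}, error) {
	if len(os.Args) != 2 {
		return nil, fmt.Errorf("expected the path to an argument file")
	}
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		return nil, err
	}
	args := map[string]interface{}{}
	if err := json.Unmarshal(raw, &args); err != nil {
		return nil, err
	}
	for name, opt := range spec {
		if _, ok := args[name]; ok {
			continue
		}
		if opt.Required {
			return nil, fmt.Errorf("missing required argument: %q", name)
		}
		args[name] = opt.Default
	}
	return args, nil
}

func main() {
	// A module using the framework is reduced to its spec and logic; a
	// framework-specific linter could then compare this spec against
	// the module docs, much like validate-modules does for Python.
	spec := map[string]Option{
		"name":  {Type: "str", Required: true},
		"state": {Type: "str", Default: "present"},
	}
	args, err := ParseArgs(spec)
	if err != nil {
		json.NewEncoder(os.Stdout).Encode(map[string]interface{}{"failed": true, "msg": err.Error()})
		os.Exit(1)
	}
	json.NewEncoder(os.Stdout).Encode(map[string]interface{}{"changed": false, "msg": "validated args for " + fmt.Sprint(args["name"])})
}
```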

Yep, I see something akin to the various antsibull tools, but specific to the language of choice. It would have common utilities for that language to do sanity checks, build the code in a common way, etc. I’m not saying antsibull or the community would work on this, but that’s where I would see tooling for languages outside of Python and PowerShell being done.

As for the size of the binaries, it depends on how the collection is designed.
If it’s a constellation of small, single-purpose modules, the size is not that big (and you can strip the binaries, indeed).
The problem is different if you design it around one big entry point.