Testing collections within the ansible package

:information_source: Background

Up until now, much of our work on the ansible package has gone towards reviewing collections for inclusion and releasing the package. I’d like to do more to test and improve the collections in the ansible package. To that end, I’ve been working on changes to antsibull and the release playbook to allow us to more effectively test the package. In addition to @mariolenz’s work on removing unmaintained collections, this should help improve the quality of the ansible community package.

:ballot_box_with_check: Byte-compilation test

As part of the release playbook, we now run a test to ensure that Python files in the ansible package are syntactically correct and can be byte-compiled (at least with the Python interpreter used to build the release). This was added in build-release: ensure Python files in the package compile (#552) · ansible-community/antsibull@27754fb · GitHub.
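
The core of that check boils down to something like the following (a minimal sketch for illustration, not the actual antsibull code):

```python
# Sketch: byte-compile every .py file in the built package tree with the
# interpreter running the release process, and report files that fail.
import py_compile
import sys
from pathlib import Path

def check_python_files(tree: Path) -> bool:
    ok = True
    for path in tree.rglob("*.py"):
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as exc:
            print(f"Failed to compile {path}: {exc}", file=sys.stderr)
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if check_python_files(Path(sys.argv[1])) else 1)
```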

:stop_button: verify-upstream

I have a WIP patch that clones collections and ensures that their contents roughly match the Galaxy collection artifacts. We cannot reliably run tests from the Galaxy collection artifacts, as they may be missing files needed to run tests, so it’s important that the contents of the git repositories are actually what’s being uploaded to Galaxy.
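
The gist of the comparison is something like this (a rough sketch, not the patch itself; it assumes the Galaxy artifact is a tarball whose members are rooted at the collection directory):

```python
# Sketch: diff the file lists of a collection's git checkout and the
# corresponding Galaxy artifact tarball.
import tarfile
from pathlib import Path

# Files generated at collection build time that won't be in the git tree
# (illustrative, not an exhaustive list).
GENERATED = {"MANIFEST.json", "FILES.json"}

def artifact_files(tarball: Path) -> set[str]:
    with tarfile.open(tarball) as tar:
        return {m.name for m in tar.getmembers() if m.isfile()}

def checkout_files(clone: Path) -> set[str]:
    return {
        str(p.relative_to(clone))
        for p in clone.rglob("*")
        if p.is_file() and ".git" not in p.parts
    }

def missing_from_checkout(clone: Path, tarball: Path) -> set[str]:
    # Files that Galaxy says exist but the tagged git tree doesn't have.
    return artifact_files(tarball) - GENERATED - checkout_files(clone)
```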

:stop_button: Sanity tests

I’ll extend that PR to support running a subset of sanity tests across the cloned collections. I’m proposing we enforce the following sanity tests:

  1. ansible-doc
  2. compile
  3. validate-modules
  4. yamllint

I want to avoid running sanity tests that are cosmetic in nature, take a particularly long time to run, or overlap with other tests. There are over 100 collections in the ansible package, so it’s important that each individual collection’s tests don’t take too long. I mainly want to ensure that the code is syntactically valid [1] and that we can build docs without errors. We might also consider running antsibull-docs lint-collection-docs.

These tests would ideally be run on a regularly scheduled basis and not as part of the release process; we don’t need to make that any longer/more complicated.

:exclamation: Initial results

The initial results are… not super promising. I ran the four tests above over all the collections from 9.0.0b1 using ansible-core 2.16.0rc1; it took a little over an hour on my laptop. 40.20% (41 / 102) of the collections failed at least one of these tests. A smaller number of collections also have file differences between their tagged releases and the Galaxy artifacts.

:thinking: What do we do about this?

We can always go through the collections and file issues. We can remove collections whose maintainers don’t respond after a certain amount of time through the regular process for nonresponsive collection maintainers. I’m open to ideas here.

:credit_card: Credits

Thanks to @rfc2549 for his previous work on running sanity tests for collections in the ansible package.


  1. We’ve had multiple problems with syntax errors that break packages. ↩︎


This is the minimum, but maybe we can reduce the load on us a little bit.

Can we add at least some of those tests to the collection_template and make them a requirement?

Hopefully, some collections would fix those issues without us having to ping them, if they could just see there’s a problem. Or are you only testing things that are already part of the current CI Testing requirements, or things that can only be done in the context of the Ansible community package?

We already require that these tests are run, and we include a CI configuration in the collection_template to run sanity tests. Some [1] collections just don’t follow these guidelines.


  1. Well, more than some. A significant number of collections fail these sanity tests. ↩︎

Yeah, these are just generic sanity tests. The code I wrote locally (but haven’t committed/pushed yet) just loops over the collections and, for each one, runs the same ansible-test sanity command that collections should already be running themselves in their CI systems, only with a smaller set of the most important sanity tests selected to save time.
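
Roughly, that loop looks like this (a simplified sketch of the uncommitted code; it assumes the collections are checked out under an ansible_collections/<namespace>/<name> tree):

```python
# Sketch: run the reduced sanity test set in every collection checkout.
import subprocess
from pathlib import Path

SANITY_TESTS = ["ansible-doc", "compile", "validate-modules", "yamllint"]

def run_sanity(collection_dir: Path) -> subprocess.CompletedProcess:
    cmd = ["ansible-test", "sanity", "--docker", "--lint"]
    cmd += [f"--test={test}" for test in SANITY_TESTS]
    # ansible-test must be run from inside the collection directory.
    return subprocess.run(cmd, cwd=collection_dir, capture_output=True, text=True)

for collection in sorted(Path("ansible_collections").glob("*/*")):
    result = run_sanity(collection)
    print(f"{collection}: {'OK' if result.returncode == 0 else 'FAILED'}")
```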

I wonder if there’s a way (in the medium to long term) to make this process more “push” instead of “pull”.

What if, as part of the inclusion requirements, the sanity (and maybe later other?) test results need to be posted to a centralized location/webhook/application (TBD)?

I think we already either encourage or require regularly scheduled test runs independent of PR/push runs.

The auditing can then happen in a way that does not depend on us running the tests ourselves, which can be complicated when you consider things like collection requirements.

If the test results do not pass (either in general or for the specific sanity tests we want), OR if results are late (for example, if we require testing at minimum weekly, then the latest result in this system should be no more than a week old), then this is an audit failure.
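
As a totally hypothetical sketch of that rule (all names made up):

```python
# Hypothetical audit check: a reported result passes the audit only if the
# required tests succeeded AND the report is fresh enough.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(weeks=1)  # assumed policy: test at minimum weekly

def audit_ok(tests_passed: bool, reported_at: datetime) -> bool:
    is_fresh = datetime.now(timezone.utc) - reported_at <= MAX_AGE
    return tests_passed and is_fresh
```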

This gives collection owners flexibility in how their tests are run and in what environments or CI systems, and decouples the runs from the audit.

Lots of details to hash out, and it’s not a short-term solution, but I like it as a forward-looking path. Thoughts?

In order to be on the safe side, I think it’s better if we test ourselves. Otherwise, we’d still rely on collections running all the correct and required tests…

Yes, but we would still have to check this manually and open issues in the collection repos.

Can we automate this, create a bot or something that creates an issue if we see any sanity test failures?


I tend to agree.

Yes, GitHub has an API to file issues.

I was thinking about a system that automatically looks for new releases of collections in Ansible, runs a test suite on them, and rates the collections by which tests pass and which do not. (For some tests it’s more tricky: whether “the collection is tagged and the tag corresponds to the built collection’s content” holds can change over time, as the tag might be added later, or might get modified, for example when it turned out the wrong commit was tagged and that’s updated. One question here is whether this check should automatically be re-run or not.)

antsibull-build could directly use the output of this system to decide which collections to exclude, and the system could automatically create issues in the collection’s repos for problems.

Nothing that’s quick to implement or will just appear anytime soon, but it would be really neat to have such a system :slight_smile:

Other than being a relatively simple approach, the nice thing about just testing all collections on a scheduled basis is that we can ensure they continue to work with new ansible-core versions.

My current approach [1] to running the sanity tests dumps the failed tests’ command stdout and JSON output into a combined YAML file, so it’ll be relatively straightforward to build automation on top of it.
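
For example, issue-filing automation on top of that file could look roughly like this (a hypothetical sketch: it assumes the combined file is a top-level mapping of collection names to results with a boolean `failed` flag, the repo lookup and token handling are stand-ins, and it uses the GitHub REST API mentioned above to create issues):

```python
# Sketch: read the combined YAML results, collect the collections whose
# sanity runs failed, and file an issue in each collection's repo.
import requests  # third-party; used for the GitHub REST API
import yaml      # PyYAML

def failed_collections(results_file: str) -> list[str]:
    with open(results_file) as f:
        results = yaml.safe_load(f)
    # Each per-collection entry is assumed to carry a `failed` flag.
    return [name for name, data in results.items() if data["failed"]]

def file_issue(repo: str, title: str, body: str, token: str) -> None:
    # POST /repos/{owner}/{repo}/issues creates a new issue.
    response = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={"Authorization": f"Bearer {token}"},
        json={"title": title, "body": body},
        timeout=30,
    )
    response.raise_for_status()
```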


  1. run_tests.py · GitHub is a standalone proof-of-concept script. I need to integrate this into antsibull-build’s codebase. ↩︎

That’s a cool idea @felixfontein!

One of the things I’ve been thinking about is “as a maintainer, what do I get for being included?” Apart from the obvious “being in the package”, mostly I get a lot of requirements to meet, right?

What I like about your idea is that it’s something we can add to the “pro” side of the balance - my collection now gets tested regularly with everything else to make sure it works, and I’m proactively warned if it doesn’t. As a maintainer, I think I’d value that kind of advance warning - and that makes it easier to “sell” the idea of being an included maintainer in the first place (and it helps maintainers interact with the wider community more, at least a bit).

Does that make sense, or am I way off base here?

I would say the main benefits of inclusion are a review from an expert collection maintainer and making the collection more visible/accessible to users, but that’s orthogonal. What I’m proposing is more of an enforcement mechanism. The collections are already required to run these tests on a regular basis in their CI pipelines, but apparently, a significant number do not. This new tooling will allow us to find broken collections and let their maintainers know what needs fixing to stay included in the ansible package.


I’m marking this as community-wg-nextmtg. I have continued working on [RFC] add `verify-upstreams` and `sanity-tests` subcommands by gotmax23 · Pull Request #556 · ansible-community/antsibull · GitHub to enable this.


We discussed this in today’s meeting.

Next steps:

<gotmax23> So, next steps:
<gotmax23> I guess I'll finish up my PR and put it up for review
<gotmax23> Finish my issue filing scripts and get feedback on the messages
<gotmax23> Probably start a vote on the issue filing plan and template for visibility

Other notes:

  • Make sure to include the ansible-test version when filing collection issues (–@felixfontein)
  • Check that ignore.txt files do not contain any required tests (–@felixfontein)
  • Figure out what to do with ovirt.ovirt’s templated code. Is this a Repository Management violation?
  • File one issue for verify-upstream problems and another for sanity-tests problems.

I opened Remove problematic placeholder vars · Issue #734 · oVirt/ovirt-ansible-collection · GitHub. We should probably add a guideline that it MUST be possible to run ansible-test sanity after checking out the collection’s source repository without running extra scripts to preprocess the sources.

I’ve marked add `verify-upstreams` and `sanity-tests` subcommands by gotmax23 · Pull Request #556 · ansible-community/antsibull · GitHub as ready for review.

I created GitHub - ansible-community/package-test-results: Coordination for Ansible package-wide testing to coordinate filing issues for the broken collections. That repository has:

  1. Data files from antsibull-build verify-upstreams (compares Galaxy collection artifacts and git sources) and antsibull-build sanity-tests (runs ansible-test sanity across collections and combines results)
  2. Script to generate a report of all of a collection’s errors and file an issue.
  3. Issue Jinja2 template
  4. Rendered issues so you can see what this looks like.
  5. GHA workflow to refresh the data files.

I’d appreciate feedback on the code and the issue template (source / rendered issues).

Curious why GitHub - telekom-mms/ansible-collection-icinga-director: An Ansible collection that contains modules to change objects in Icinga 2 using the director API. is not included. No failed tests?

package-test-results/data/9/ansible-9.1.0-missing-upstreams.yaml at main · ansible-community/package-test-results · GitHub only includes collections that have missing/mismatching files, but https://raw.githubusercontent.com/ansible-community/package-test-results/main/data/9/ansible-9.1.0-sanity.yaml does contain telekom_mms.icinga_director with:

  telekom_mms.icinga_director:
    failed: false
    sanity:
      banned_ignore_entries: []
      cmd:
      - ansible-test
      - sanity
      - --test=ansible-doc
      - --test=compile
      - --test=validate-modules
      - --test=yamllint
      - --docker
      - --lint
      ignore_entries: []
      ignores_file: null
      returncode: 0
      runtime: 0.45
      stderr: 'Starting new "ansible-test-controller-QcNQFgwV" container.

        Adding "ansible-test-controller-QcNQFgwV" to container database.

        Running sanity test "ansible-doc"

        Running sanity test "compile" on Python 2.7

        Running sanity test "compile" on Python 3.6

        Running sanity test "compile" on Python 3.7

        Running sanity test "compile" on Python 3.8

        Running sanity test "compile" on Python 3.9

        Running sanity test "compile" on Python 3.10

        Running sanity test "compile" on Python 3.11

        Running sanity test "compile" on Python 3.12

        Running sanity test "validate-modules"

        Running sanity test "yamllint"

        '
      stdout: ''
      test_json: {}

Since there’s no problem, no issue would be created :slight_smile:


On the issue template, it says “Let me know if you see errors in this issue.” - Is it obvious who the ‘me’ is when someone gets this issue?


It should be, as they’ll be filed from my GitHub account.