What examples have we seen so far across our ecosystem (GitHub, Forum, Reddit, etc.) where someone has used an AI-related tool to support their contribution?
Where could AI tooling help?
Real-world examples of AI helping, such as:

- Improved testing and some long-term bugs found
- Improving L10N translations
- Improving the documentation via AI (I care about this)
- Improving the documentation to allow LLMs to be better trained (I care about this)
- Community discussion around CONTENT.md (CLAUDE.md, etc.) files to provide context to the various tools, or, more generally, structuring the project to be better suited for AI contributions
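
To illustrate the kind of context file I mean, here is a hypothetical sketch of what one could contain for a collection (the file name and contents are only an example, not an agreed convention):

```markdown
<!-- Hypothetical CLAUDE.md / context file for an Ansible collection (example only) -->
# Context for AI coding tools

- This repository is an Ansible collection: modules live in `plugins/modules/`, unit tests in `tests/unit/`.
- Keep the `DOCUMENTATION`, `EXAMPLES` and `RETURN` blocks of modules in sync with code changes.
- Follow the Ansible collection development guidelines and the project's changelog fragment conventions.
- Do not put real credentials or inventory data in examples or tests.
```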
What good success stories exist in other (open source) projects?
Can you share some examples of good contributions from other projects?
For the community.beszel Ansible collection, I’ve been using Cursor’s Agent Mode with GPT-5 to help me write unit tests for the two new modules added in the 1.3.0 release.
Yes, that’s the right pull request. Apart from adding the modules themselves to the Agent context, I also provided links to the community.proxmox collection’s unit tests and explained that they use the community.internal_test_tools collection in their unit tests. I did this simply by providing links to the repos in the Agent prompt. During the model’s reasoning I could see it was looking at both collections to understand how the various classes and functions are used.
It wasn’t perfect and required several iterations to get to a working state, but the agent knew to run the `uv run ansible-test sanity --docker` and `uv run ansible-test units --docker` commands because I instructed it to; at that time, the sections on running the various tests were not yet in community.beszel/CONTRIBUTING.md at main · ansible-collections/community.beszel · GitHub.
Looking forward, I will probably take a look at using Cursor Rules so that memory is retained between chats / completions.
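
As a rough idea of what such a rule could capture so it doesn’t have to be repeated in every prompt (just a sketch, not something I have set up yet):

```text
# Hypothetical project rule for community.beszel (sketch only)
- Verify changes with `uv run ansible-test sanity --docker` and `uv run ansible-test units --docker`.
- Model new unit tests on the community.proxmox collection's tests and use the community.internal_test_tools helpers.
- Briefly explain each failed iteration before attempting a fix.
```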
First, someone recently pointed me to deepwiki.com, which basically clones a git repo, parses everything and generates docs from it. For example: ansible/ansible | DeepWiki
I have no strong opinion, but to provide some nuance: it built docs for ara that aren’t perfectly accurate or always useful, but at first glance they felt good enough not to be harmful. I wouldn’t rely on them or support them, but they might be a net positive or perhaps give ideas on where to improve the docs. ¯\_(ツ)_/¯
Some time ago I experimented with an MCP server for ara as a learning experience and was pleasantly surprised. I can go into more details, but in a nutshell, the idea was to enable a local LLM model (like mistral, qwen, deepseek, etc.) to access recorded playbook results, files, hosts, tasks and various metrics from ara. It works: it is able to find issues, troubleshoot them and propose fixes. I recorded a brief demo about it here: ARA Records Ansible: "Some time ago someone asked me about whether I th…" - Fosstodon
I would like to spend some more time on this in the not too distant future to see what works and flesh it out.
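
For the curious, the core of it looks roughly like this. This is a simplified sketch rather than the actual code: it assumes the official MCP Python SDK (the `mcp` package) and queries ara’s REST API over HTTP, and the query parameters shown here are illustrative.

```python
# Simplified sketch of an MCP server exposing ara's recorded playbook data
# to a local LLM. Assumes the MCP Python SDK ("mcp" on PyPI) and an ara API
# server reachable at ARA_API; query parameters are illustrative.
import os

import requests
from mcp.server.fastmcp import FastMCP

ARA_API = os.environ.get("ARA_API", "http://127.0.0.1:8000/api/v1")
mcp = FastMCP("ara")


@mcp.tool()
def failed_playbooks(limit: int = 5) -> list[dict]:
    """Return the most recently recorded failed playbook runs."""
    response = requests.get(
        f"{ARA_API}/playbooks",
        params={"status": "failed", "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]


@mcp.tool()
def failed_results(playbook_id: int) -> list[dict]:
    """Return the failed task results for a playbook so the model can troubleshoot them."""
    response = requests.get(
        f"{ARA_API}/results",
        params={"playbook": playbook_id, "status": "failed"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]


if __name__ == "__main__":
    # Runs over stdio by default so a local MCP client can launch it.
    mcp.run()
```

A local client launches the server over stdio and the model decides when to call these tools while troubleshooting.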
I’m planning to bring this up as a discussion topic at the next DaWGs meeting. I’m curious about how we can get LLMs pointed to the latest version of the package docs and make it easier for them to ingest that content.
For pointing LLMs to the latest docs, I think we just keep up with our redirect strategy and good use of canonical URLs. The llms.txt standard seems like a good way to make the docsite LLM-friendly.
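
For reference, llms.txt is just a Markdown file served at the site root (an H1 title, an optional summary blockquote, then sections of links). A minimal sketch for the docsite could look something like this, with the section names and link choices purely illustrative:

```markdown
# Ansible documentation

> Documentation for Ansible and the Ansible community package.

## Guides

- [Getting started](https://docs.ansible.com/ansible/latest/getting_started/index.html): first steps with Ansible
- [Playbook guide](https://docs.ansible.com/ansible/latest/playbook_guide/index.html): writing and running playbooks

## Reference

- [Collection index](https://docs.ansible.com/ansible/latest/collections/index.html): modules and plugins in the community package
```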
The issue I think we’ll have with getting an llms.txt file for the latest package docs is the same old problem with Sphinx build performance. It’s a resource hog. Generating the Markdown from RST means resolving a lot of include directives and reading files into memory. Doing all that on top of an HTML build seems to nearly triple the I/O work necessary.