Hi folks
I’ve been thinking a bit about how we can better collaborate and build a community around the various AI projects that people are working on.
Currently I see some great ideas, but they're in isolated pockets and difficult to find. I was surprised to see how much has already happened that I had no idea about.
To improve that I’d like to propose a community-driven repository under the ansible-community GitHub Org for sharing AI Skills etc.
This follows on from the existing AI discussions, in particular How AI can help the Ansible Community & Development.
As @oranod mentioned in Community roadmap 2026: Off to the races, @Andersson007 is currently working on updates to the Ansible AI Policy, so stay tuned for that.
Key Principles
- LLM-agnostic: works across AI platforms
- For all: this isn’t just about Ansible Collection development, but all of Ansible
- Community driven: contributions not just welcome, but encouraged
- Community maintained: not Red Hat Supported
- Iterate fast: expect rapid evolution as the space matures
- For practitioners: these are tools for Ansible developers, by Ansible developers
- Use if you want to: complementary to existing workflows, never mandatory
What we’re proposing
A new repository under github.com/ansible-community/ with three areas of focus:
- Agent skills: Reusable skills for AI-assisted Ansible development (scaffolding, review, debugging, fixing)
- Curated awesome-list: links to community skills repos, MCP servers, AI tools, and resources specific to Ansible, like awesome-ansible
- Tooling: Testing, validation, and installation helpers for skills
Initial content would include Leo Gallego’s four existing skills (CoP review, scaffold role/collection/EE).
Questions for the community
We want and need your input on all of this, so please share your thoughts by replying to this thread:
- Repo name: What should we call this repository?
- Directory structure: Which layout option (A, B, or C) works best? Or propose your own.
- File format: Which skill/guidance format should we focus on first? (SKILL.md, AGENTS.md, both?)
- Skills wishlist: What other skills would help your Ansible development workflow?
- Agent platforms: What LLM/agent platforms are you using for Ansible development today?
- Quality and review: How should we handle quality review of contributed skills? Peer review? Automated testing? (Likely both, but what are good ways to test?)
- Pain points: What collection maintenance or development pain points could AI skills address for you?
- Existing projects: What other related projects are there that we should be inspired by?
Want to help build this?
We’re looking for people to help get this off the ground:
- Skill authors - Have you written agent skills or prompts for Ansible work? We’d love to include them (with full attribution).
- Maintainers and reviewers - Interested in helping review and curate contributed skills?
- Tooling contributors - Want to help build validation, testing, or installation tooling?
- Documentation writers - Help us write clear contributing guides and skill authoring documentation.
- Users of other AI platforms - Using Copilot, Cursor, Cody, or another agent with Ansible? Help us ensure cross-platform compatibility.
If any of this sounds interesting to you, reply to this thread. The more perspectives we have early on, the better this will be for everyone.
Proposed directory structure
Every week there seems to be a new “hot way” to lay out and share skills. So just as this repo needs to be LLM-agnostic, it needs to be tool-agnostic as well.
Option A: By content type
```
repo/
  skills/
    ansible_cop_review/SKILL.md
    ansible_scaffold_role/SKILL.md
    ...
  awesome_list/README.md
  tooling/
    README.md
    tool1/...
    tool2/...
    ...
```
Pros: Clear separation of concerns, easy to find what you’re looking for.
Cons: Flat skills list could get unwieldy as it grows.
Option B: By workflow stage
```
repo/
  scaffold/
    role/SKILL.md
    collection/SKILL.md
    ee/SKILL.md
  review/
    cop_review/SKILL.md
  debug/
  maintain/
  awesome_list/README.md
  tooling/README.md
```
Pros: Organised by what you’re trying to do.
Cons: Some skills span multiple stages; categorisation can be subjective.
Option C: Flat with metadata
```
repo/
  skills/
    ansible_cop_review/SKILL.md     # frontmatter: category: review
    ansible_scaffold_role/SKILL.md  # frontmatter: category: scaffold
    ...
  awesome_list/README.md
  tooling/README.md
```
Pros: Simple layout, categories in metadata allow flexible filtering.
Cons: Requires tooling to make categories discoverable.
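To make Option C concrete, here's a minimal sketch of what that discoverability tooling could look like. The layout, the `category` field, and the naive `key: value` frontmatter parsing are all assumptions for illustration, not a proposed implementation:

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Naively parse `key: value` pairs from a leading ----delimited frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def index_by_category(repo_root: Path) -> dict[str, list[str]]:
    """Group every skills/*/SKILL.md under repo_root by its `category` field."""
    index: dict[str, list[str]] = {}
    for skill_md in sorted(repo_root.glob("skills/*/SKILL.md")):
        meta = parse_frontmatter(skill_md.read_text())
        category = meta.get("category", "uncategorised")
        index.setdefault(category, []).append(skill_md.parent.name)
    return index
```

A tiny script like this could power a generated "skills by category" section in the repo README, which would address the main con of Option C without changing the flat layout.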
Skills we’d like to see
Beyond the initial four skills from Leo’s repo, here are gaps we’ve identified:
| Skill idea | Why it would help |
|---|---|
| ansible-test sanity fixer | Diagnose and fix ansible-test sanity failures; map error codes to fixes. Would help with the requirement that collections run against ansible-core devel |
| Collection best practices | Opinionated set of collection development patterns (from inclusion criteria, made reusable) |
| AGENTS.md generator | Generate a project-specific AGENTS.md for any Ansible collection (note: ansible-creator already does a basic version via ansible-creator init) |
| Module documentation fixer | Can we move from generic DOCUMENTATION/EXAMPLES to real-world examples? |
What skills would help your workflow? We’d love to hear what pain points you’d want AI assistance with.
Tooling and testing
This is an area that needs community input. Some questions:
- Validation: How do we ensure contributed skills are well-formed and actually useful?
- Testing: Can we build a skill testing framework? What would that look like?
- Linting: Markdown linting is a start, but what about content quality?
- Installation: How should users discover and install skills from this repo?
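As a starting point for the validation question, a CI check could verify that each contributed skill directory is at least well-formed before human review. This is only a sketch: the required fields below are a hypothetical schema, not an agreed one:

```python
from pathlib import Path

REQUIRED_FIELDS = ("name", "description", "category")  # hypothetical schema, to be agreed by the community

def validate_skill(skill_dir: Path) -> list[str]:
    """Return a list of problems with a skill directory; an empty list means it passes."""
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.is_file():
        return [f"{skill_dir.name}: missing SKILL.md"]
    text = skill_md.read_text()
    if not text.startswith("---"):
        return [f"{skill_dir.name}: SKILL.md has no frontmatter block"]
    # Everything between the first pair of --- markers is treated as frontmatter.
    frontmatter = text.split("---", 2)[1]
    problems = []
    for field in REQUIRED_FIELDS:
        if not any(line.strip().startswith(f"{field}:") for line in frontmatter.splitlines()):
            problems.append(f"{skill_dir.name}: frontmatter is missing '{field}:'")
    return problems
```

Something like this could run in CI on every PR, leaving peer review to focus on whether a skill is actually useful rather than whether it is structurally complete.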
What’s already happening
Having a look across ansible, ansible-community, and ansible-collections, I see some great things.
Please reply to this thread with anything I’ve missed.
There were more examples than I expected, so note that I used Claude to generate the summaries.
AGENTS.md/CLAUDE.md
| Repo | Purpose | Reusable? |
|---|---|---|
| ansible/ansible | Comprehensive PR review and dev guide for ansible-core: licensing (GPLv3/BSD-2-Clause), testing commands, CI failure debugging, code style, changelog requirements | |
| ansible/ansible-creator | Basic project interaction (use uv run, run tox) | |
| ansible/ansible-creator docs/agents.md | 567-line comprehensive Ansible coding guidelines: Zen of Ansible, collection/playbook project structure, YAML/Python formatting, naming conventions, roles, plugins, inventories, playbooks, Jinja2 | |
| ansible/ansible-creator (scaffolded) | Template AGENTS.md generated by ansible-creator init - links to docs/agents.md | |
| ansible/vscode-ansible | VS Code extension dev: code validation, commit messages (50/72 rule), PR structure, testing, docs | Partly - conventional commits and PR patterns are reusable |
| ansible/team-devtools | Static checks: TOML formatting (tombi), pre-commit config, conventional commits | Partly - tooling config patterns |
| ansible/platform-service-framework | Jinja template for generating AGENTS.md in new projects | Meta - generates AGENTS.md |
| ansible-collections/ansible-inclusion | AI-guided collection inclusion review (parse galaxy.yml, checklist, generate report) | Yes - inclusion criteria are largely about good collection development practices |
| ansible/ansible-backstage-plugins | CLAUDE.md with dev commands, architecture, plugin structure for TypeScript/React Backstage monorepo | |
| ansible/metrics-service | CLAUDE.md for Django project: 658 lines covering dev setup, testing, architecture, task management, feature flags | |
| ansible-collections/community.beszel | Collection dev guide: Molecule testing, antsibull-nox, ruff, changelog management | |
| ansible-collections/community.clickhouse | Collection dev: coding guidelines (KISS/DRY/YAGNI), module_utils patterns, conventional commits, changelog fragments, subagent delegation | |
| ansible-collections/community.openwrt | Role-to-collection migration guide for OpenWRT (shell modules, no Python) | Partly - migration patterns could be generalised |
| ansible-community/ara | MCP server guidance: ARA data model, read-only investigation patterns, token economy, tool selection | |
SKILLS.md
| Repo | Skills | Notes |
|---|---|---|
| ansible-collections/community.clickhouse | 3 skills in .agents/skills/: pr-review, release, run-tests | |