Proposal: New repo for Ansible AI collaboration

Hi folks,

I’ve been thinking a bit about how we can better collaborate and build a community around the various AI projects that people are working on.

Currently I see some great ideas, but they’re in isolated pockets and difficult to find. I was surprised to see how much has already happened that I had no idea about.

To improve that I’d like to propose a community-driven repository under the ansible-community GitHub Org for sharing AI Skills etc.

This follows on from the existing AI discussions, in particular How AI can help the Ansible Community & Development.

As @oranod mentioned in Community roadmap 2026: Off to the races, @Andersson007 is currently working on updates to the Ansible AI Policy, so stay tuned for that.

Key Principles

  • LLM-agnostic: works across AI platforms
  • For all: this isn’t just about Ansible Collection development, but all of Ansible
  • Community driven: contributions not just welcome, but encouraged
  • Community maintained: not Red Hat Supported
  • Iterate fast: expect rapid evolution as the space matures
  • For practitioners: these are tools for Ansible developers, by Ansible
    developers
  • Use if you want to: complementary to existing workflows, never mandatory

What we’re proposing

A new repository under github.com/ansible-community/ with three areas of focus:

  1. Agent skills: Reusable skills for AI-assisted Ansible development (scaffolding, review, debugging, fixing)
  2. Curated awesome-list: Links to community skills repos, MCP servers, AI tools, and resources specific to Ansible, like awesome-ansible
  3. Tooling: Testing, validation, and installation helpers for skills

Initial content would include Leo Gallego’s four existing skills (CoP review, scaffold role/collection/EE).

Questions for the community

We want and need your input on all of this, so please do share your thoughts by replying to this thread:

  1. Repo name: What should we call this repository?
  2. Directory structure: Which layout option (A, B, or C) works best? Or propose your own.
  3. File format: Which skill/guidance format should we focus on first? (SKILL.md, AGENTS.md, both?)
  4. Skills wishlist: What other skills would help your Ansible development workflow?
  5. Agent platforms: What LLM/agent platforms are you using for Ansible development today?
  6. Quality and review: How should we handle quality review of contributed skills? Peer review? Automated testing? (Likely both, though what good ways are there to test?)
  7. Pain points: What collection maintenance or development pain points could AI skills address for you?
  8. Existing project: What other related projects are there that we should be inspired by?

Want to help build this?

We’re looking for people to help get this off the ground:

  • Skill authors - Have you written agent skills or prompts for Ansible work? We’d love to include them (with full attribution).
  • Maintainers and reviewers - Interested in helping review and curate contributed skills?
  • Tooling contributors - Want to help build validation, testing, or installation tooling?
  • Documentation writers - Help us write clear contributing guides and skill authoring documentation.
  • Users of other AI platforms - Using Copilot, Cursor, Cody, or another agent with Ansible? Help us ensure cross-platform compatibility.

If any of this sounds interesting to you, reply to this thread. The more perspectives we have early on, the better this will be for everyone.

Proposed directory structure

Every week there seems to be a new “hot way” to lay out and share skills.
So in the same way this repo needs to be LLM-agnostic, it needs to be tool-agnostic as well.

Option A: By content type

repo/
  skills/
    ansible_cop_review/SKILL.md
    ansible_scaffold_role/SKILL.md
    ...
  awesome_list/README.md
  tooling/
    README.md
    tool1/...
    tool2/...
    ...

Pros: Clear separation of concerns, easy to find what you’re looking for.
Cons: Flat skills list could get unwieldy as it grows.

Option B: By workflow stage

repo/
  scaffold/
    role/SKILL.md
    collection/SKILL.md
    ee/SKILL.md
  review/
    cop_review/SKILL.md
  debug/
  maintain/
  awesome_list/README.md
  tooling/README.md

Pros: Organised by what you’re trying to do.
Cons: Some skills span multiple stages; categorisation can be subjective.

Option C: Flat with metadata

repo/
  skills/
    ansible_cop_review/SKILL.md      # frontmatter: category: review
    ansible_scaffold_role/SKILL.md   # frontmatter: category: scaffold
    ...
  awesome_list/README.md
  tooling/README.md

Pros: Simple layout, categories in metadata allow flexible filtering.
Cons: Requires tooling to make categories discoverable.
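As a sketch of the kind of tooling Option C would need (the paths and the `category` field follow the layout above, but everything else here is a hypothetical starting point, not an agreed design), a small script could make the frontmatter categories discoverable by grouping skills:

```python
from collections import defaultdict
from pathlib import Path


def parse_frontmatter(text: str) -> dict:
    """Extract simple 'key: value' pairs from a ----delimited frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def skills_by_category(repo_root: Path) -> dict:
    """Group every skills/*/SKILL.md under repo_root by its 'category' field."""
    grouped = defaultdict(list)
    for skill_file in sorted(repo_root.glob("skills/*/SKILL.md")):
        meta = parse_frontmatter(skill_file.read_text(encoding="utf-8"))
        grouped[meta.get("category", "uncategorised")].append(skill_file.parent.name)
    return dict(grouped)
```

Something like this could back a simple `list-skills --category review` helper, so the flat layout never has to be browsed by hand.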

Skills we’d like to see

Beyond the initial four skills from Leo’s repo, here are gaps we’ve identified:

| Skill idea | Why it would help |
| --- | --- |
| ansible-test sanity fixer | Diagnose and fix ansible-test sanity failures; map error codes to fixes. Would help with requiring collections to run against ansible-core:devel |
| Collection best practices | Opinionated set of collection development patterns (from inclusion criteria, made reusable) |
| AGENTS.md generator | Generate a project-specific AGENTS.md for any Ansible collection (note: ansible-creator already does a basic version via ansible-creator init) |
| Module documentation fixer | Can we move from generic DOCUMENTATION/EXAMPLES to real world examples? |

What skills would help your workflow? We’d love to hear what pain points you’d want AI assistance with.

Tooling and testing

This is an area that needs community input. Some questions:

  • Validation: How do we ensure contributed skills are well-formed and actually useful?
  • Testing: Can we build a skill testing framework? What would that look like?
  • Linting: Markdown linting is a start, but what about content quality?
  • Installation: How should users discover and install skills from this repo?
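As a strawman for the validation question (the required fields and rules here are assumptions to kick off discussion, not an agreed schema), a first-pass check could be as small as:

```python
from pathlib import Path

# Hypothetical minimum metadata; the actual required fields are up for discussion.
REQUIRED_FIELDS = {"name", "description"}


def validate_skill(skill_file: Path) -> list[str]:
    """Return a list of problems found in a SKILL.md; an empty list means it passes."""
    lines = skill_file.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing frontmatter block"]
    try:
        # Find the closing --- of the frontmatter block.
        end = lines[1:].index("---") + 1
    except ValueError:
        return ["unterminated frontmatter block"]
    problems = []
    fields = {line.partition(":")[0].strip() for line in lines[1:end] if ":" in line}
    for missing in sorted(REQUIRED_FIELDS - fields):
        problems.append(f"missing required field: {missing}")
    if not "".join(lines[end + 1:]).strip():
        problems.append("skill body is empty")
    return problems
```

Structural checks like this are the easy part; judging whether a skill is actually useful probably still needs peer review on top.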

What’s already happening

Having a look across ansible, ansible-community, and ansible-collections, I see some great things.

Please reply to this thread with anything I’ve missed.

There were more examples than I expected, so note I used Claude to generate the summaries.

AGENTS.md/CLAUDE.md

| Repo | Purpose | Reusable? |
| --- | --- | --- |
| ansible/ansible | Comprehensive PR review and dev guide for ansible-core: licensing (GPLv3/BSD-2-Clause), testing commands, CI failure debugging, code style, changelog requirements | :white_check_mark: Yes - PR review and testing patterns are widely applicable |
| ansible/ansible-creator | Basic project interaction (use uv run, run tox) | :white_check_mark: Yes - simple dev workflow pattern |
| ansible/ansible-creator docs/agents.md | 567-line comprehensive Ansible coding guidelines: Zen of Ansible, collection/playbook project structure, YAML/Python formatting, naming conventions, roles, plugins, inventories, playbooks, Jinja2 | :white_check_mark: Yes - the canonical “how to write Ansible content” guide for agents |
| ansible/ansible-creator (scaffolded) | Template AGENTS.md generated by ansible-creator init - links to docs/agents.md | :white_check_mark: Yes - this is what new projects get by default |
| ansible/vscode-ansible | VS Code extension dev: code validation, commit messages (50/72 rule), PR structure, testing, docs | Partly - conventional commits and PR patterns are reusable |
| ansible/team-devtools | Static checks: TOML formatting (tombi), pre-commit config, conventional commits | Partly - tooling config patterns |
| ansible/platform-service-framework | Jinja template for generating AGENTS.md in new projects | Meta - generates AGENTS.md |
| ansible-collections/ansible-inclusion | AI-guided collection inclusion review (parse galaxy.yml, checklist, generate report) | Yes - inclusion criteria are largely about good collection development practices |
| ansible/ansible-backstage-plugins | CLAUDE.md with dev commands, architecture, plugin structure for TypeScript/React Backstage monorepo | |
| ansible/metrics-service | CLAUDE.md for Django project: 658 lines covering dev setup, testing, architecture, task management, feature flags | |
| ansible-collections/community.beszel | Collection dev guide: Molecule testing, antsibull-nox, ruff, changelog management | :white_check_mark: Yes - testing and dev patterns are universal across collections |
| ansible-collections/community.clickhouse | Collection dev: coding guidelines (KISS/DRY/YAGNI), module_utils patterns, conventional commits, changelog fragments, subagent delegation | :white_check_mark: Yes - excellent collection development template |
| ansible-collections/community.openwrt | Role-to-collection migration guide for OpenWRT (shell modules, no Python) | Partly - migration patterns could be generalised |
| ansible-community/ara | MCP server guidance: ARA data model, read-only investigation patterns, token economy, tool selection | :white_check_mark: Yes - MCP server pattern and token-aware design |

SKILLS.md

| Repo | Skills | Notes |
| --- | --- | --- |
| ansible-collections/community.clickhouse | 3 skills in .agents/skills/: pr-review, release, run-tests | :white_check_mark: Yes - collection PR review and release patterns are very reusable |

Missing the github part on that link. The repo is at GitHub - leogallego/claude-ansible-skills: A collection of Claude Code skills for Ansible automation development following Red Hat Communities of Practice (CoP) good practices.


Thanks russoz, fixed.
