Proposal: New repo for Ansible AI collaboration

Hi folks

I’ve been thinking a bit about how we can better collaborate and build a community around the various AI projects that people are working on.

Currently I see some great ideas, though they’re in isolated pockets and difficult to find. I was surprised to see how much has already happened that I had no idea about.

To improve that I’d like to propose a community-driven repository under the ansible-community GitHub org for sharing AI skills and related resources.

This follows on from the existing AI discussions, in particular How AI can help the Ansible Community & Development

As @oranod mentioned in Community roadmap 2026: Off to the races, @Andersson007 is currently working on updates to the Ansible AI Policy, so stay tuned for that.

Key Principles

  • LLM-agnostic: works across AI platforms
  • For all: This isn’t just about Ansible Collection development, but all of Ansible
  • Community driven: contributions not just welcome, but encouraged
  • Community maintained: not Red Hat Supported
  • Iterate fast: expect rapid evolution as the space matures
  • For practitioners: these are tools for Ansible developers, by Ansible
    developers
  • Use if you want to: complementary to existing workflows, never mandatory

What we’re proposing

A new repository under github.com/ansible-community/ with three areas of focus:

  1. Agent skills: Reusable skills for AI-assisted Ansible development (scaffolding, review, debugging, fixing)
  2. Curated awesome-list: Links to community skills repos, MCP servers, AI tools, and resources specific to Ansible, in the style of awesome-ansible
  3. Tooling: Testing, validation, and installation helpers for skills

Initial content would include Leo Gallego’s four existing skills (CoP review, scaffold role/collection/EE).

Questions for the community

We want and need your input on all of this, so please do share your thoughts by replying to this thread:

  1. Repo name: What should we call this repository?
  2. Directory structure: Which layout option (A, B, or C) works best? Or propose your own.
  3. File format: Which skill/guidance format should we focus on first? (SKILL.md, AGENTS.md, both?)
  4. Skills wishlist: What other skills would help your Ansible development workflow?
  5. Agent platforms: What LLM/agent platforms are you using for Ansible development today?
  6. Quality and review: How should we handle quality review of contributed skills? Peer review? Automated testing? (Likely both, though what good ways are there to test?)
  7. Pain points: What collection maintenance or development pain points could AI skills address for you?
  8. Existing projects: What other related projects are there that we should be inspired by?

Want to help build this?

We’re looking for people to help get this off the ground:

  • Skill authors - Have you written agent skills or prompts for Ansible work? We’d love to include them (with full attribution).
  • Maintainers and reviewers - Interested in helping review and curate contributed skills?
  • Tooling contributors - Want to help build validation, testing, or installation tooling?
  • Documentation writers - Help us write clear contributing guides and skill authoring documentation.
  • Users of other AI platforms - Using Copilot, Cursor, Cody, or another agent with Ansible? Help us ensure cross-platform compatibility.

If any of this sounds interesting to you, reply to this thread. The more perspectives we have early on, the better this will be for everyone.

Proposed directory structure

Every week there seems to be a new “hot way” to lay out and share skills, etc.
So in the same way this repo needs to be LLM-agnostic, we need to be tool-agnostic as well.

Option A: By content type

repo/
  skills/
    ansible_cop_review/SKILL.md
    ansible_scaffold_role/SKILL.md
    ...
  awesome_list/README.md
  tooling/
    README.md
    tool1/...
    tool2/...
    ...

Pros: Clear separation of concerns, easy to find what you’re looking for.
Cons: Flat skills list could get unwieldy as it grows.

Option B: By workflow stage

repo/
  scaffold/
    role/SKILL.md
    collection/SKILL.md
    ee/SKILL.md
  review/
    cop_review/SKILL.md
  debug/
  maintain/
  awesome_list/README.md
  tooling/README.md

Pros: Organised by what you’re trying to do.
Cons: Some skills span multiple stages; categorisation can be subjective.

Option C: Flat with metadata

repo/
  skills/
    ansible_cop_review/SKILL.md      # frontmatter: category: review
    ansible_scaffold_role/SKILL.md   # frontmatter: category: scaffold
    ...
  awesome_list/README.md
  tooling/README.md

Pros: Simple layout, categories in metadata allow flexible filtering.
Cons: Requires tooling to make categories discoverable.
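
To make Option C concrete, here is a sketch of what frontmatter-based categorisation could look like at the top of a SKILL.md. The `name` and `description` fields follow the common SKILL.md convention; `category` and `platforms` are hypothetical fields added for illustration, not a settled format:

```yaml
---
name: ansible_cop_review
description: Review an Ansible collection against CoP good practices
category: review              # hypothetical: scaffold | review | debug | maintain
platforms: [claude, cursor]   # hypothetical: agents the skill has been tested with
---
```

A small script (or CI job) could then read just the frontmatter to build category indexes, without imposing any directory layout.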

Skills we’d like to see

Beyond the initial four skills from Leo’s repo, here are gaps we’ve identified:

| Skill idea | Why it would help |
| --- | --- |
| ansible-test sanity fixer | Diagnose and fix `ansible-test sanity` failures; map error codes to fixes. Would help with requiring collections to run against ansible-core:devel |
| Collection best practices | Opinionated set of collection development patterns (from inclusion criteria, made reusable) |
| AGENTS.md generator | Generate a project-specific AGENTS.md for any Ansible collection (note: ansible-creator already does a basic version via `ansible-creator init`) |
| Module documentation fixer | Can we move from generic DOCUMENTATION/EXAMPLES to real-world examples? |

What skills would help your workflow? We’d love to hear what pain points you’d want AI assistance with.

Tooling and testing

This is an area that needs community input. Some questions:

  • Validation: How do we ensure contributed skills are well-formed and actually useful?
  • Testing: Can we build a skill testing framework? What would that look like?
  • Linting: Markdown linting is a start, but what about content quality?
  • Installation: How should users discover and install skills from this repo?
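
As a starting point for the validation question, here is a minimal sketch of what a skill checker could look like. It assumes skills carry YAML frontmatter with (hypothetical) required `name` and `description` fields; real tooling would use a proper YAML parser rather than the naive key extraction shown here:

```python
# Hypothetical sketch of a SKILL.md validator: checks that a document starts
# with YAML frontmatter containing a few assumed required fields and has a
# non-empty body. Field names are placeholders, not a settled convention.

REQUIRED_FIELDS = {"name", "description"}

def validate_skill(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md document."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing YAML frontmatter opening '---'"]
    try:
        # Find the closing '---' of the frontmatter block.
        end = lines[1:].index("---") + 1
    except ValueError:
        return ["frontmatter is never closed with '---'"]
    # Naive top-level key extraction; real tooling would parse the YAML.
    keys = {
        line.split(":", 1)[0].strip()
        for line in lines[1:end]
        if ":" in line and not line.startswith(" ")
    }
    for field in sorted(REQUIRED_FIELDS - keys):
        problems.append(f"frontmatter missing required field: {field}")
    if not any(line.strip() for line in lines[end + 1:]):
        problems.append("no skill body after frontmatter")
    return problems
```

Something this small could run as a pre-commit hook or CI step on every contributed skill; content quality is a harder, separate problem.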

What’s already happening

Having a look across ansible, ansible-community, and ansible-collections, I see some great things.

Please reply to this thread with anything I’ve missed.

There were more examples than I expected, so note I used Claude to generate the summaries.

AGENTS.md/CLAUDE.md

| Repo | Purpose | Reusable? |
| --- | --- | --- |
| ansible/ansible | Comprehensive PR review and dev guide for ansible-core: licensing (GPLv3/BSD-2-Clause), testing commands, CI failure debugging, code style, changelog requirements | :white_check_mark: Yes - PR review and testing patterns are widely applicable |
| ansible/ansible-creator | Basic project interaction (use `uv run`, run `tox`) | :white_check_mark: Yes - simple dev workflow pattern |
| ansible/ansible-creator docs/agents.md | 567-line comprehensive Ansible coding guidelines: Zen of Ansible, collection/playbook project structure, YAML/Python formatting, naming conventions, roles, plugins, inventories, playbooks, Jinja2 | :white_check_mark: Yes - the canonical “how to write Ansible content” guide for agents |
| ansible/ansible-creator (scaffolded) | Template AGENTS.md generated by `ansible-creator init` - links to docs/agents.md | :white_check_mark: Yes - this is what new projects get by default |
| ansible/vscode-ansible | VS Code extension dev: code validation, commit messages (50/72 rule), PR structure, testing, docs | Partly - conventional commits and PR patterns are reusable |
| ansible/team-devtools | Static checks: TOML formatting (tombi), pre-commit config, conventional commits | Partly - tooling config patterns |
| ansible/platform-service-framework | Jinja template for generating AGENTS.md in new projects | Meta - generates AGENTS.md |
| ansible-collections/ansible-inclusion | AI-guided collection inclusion review (parse galaxy.yml, checklist, generate report) | Yes - inclusion criteria are largely about good collection development practices |
| ansible/ansible-backstage-plugins | CLAUDE.md with dev commands, architecture, plugin structure for TypeScript/React Backstage monorepo | |
| ansible/metrics-service | CLAUDE.md for Django project: 658 lines covering dev setup, testing, architecture, task management, feature flags | |
| ansible-collections/community.beszel | Collection dev guide: Molecule testing, antsibull-nox, ruff, changelog management | :white_check_mark: Yes - testing and dev patterns are universal across collections |
| ansible-collections/community.clickhouse | Collection dev: coding guidelines (KISS/DRY/YAGNI), module_utils patterns, conventional commits, changelog fragments, subagent delegation | :white_check_mark: Yes - excellent collection development template |
| ansible-collections/community.openwrt | Role-to-collection migration guide for OpenWRT (shell modules, no Python) | Partly - migration patterns could be generalised |
| ansible-community/ara | MCP server guidance: ARA data model, read-only investigation patterns, token economy, tool selection | :white_check_mark: Yes - MCP server pattern and token-aware design |

SKILLS.md

| Repo | Skills | Notes |
| --- | --- | --- |
| ansible-collections/community.clickhouse | 3 skills in `.agents/skills/`: pr-review, release, run-tests | :white_check_mark: Yes - collection PR review and release patterns are very reusable |

Missing the github part on that link. The repo is in GitHub - leogallego/claude-ansible-skills: A collection of Claude Code skills for Ansible automation development following Red Hat Communities of Practice (CoP) good practices.


Thanks russoz, fixed.


I think I’ve created two agent skills which could be worthwhile adding to this new repository (once it exists) :slightly_smiling_face:

For the upcoming release (community.beszel 1.0.0 by dbrennand · Pull Request #38 · ansible-collections/community.beszel), I’m using Claude for various things.

Claude Code commit skill

Adds .claude/skills/commit/SKILL.md — an AI agent skill for creating conventional commits with FQCN scopes for Ansible collection content, with automatic splitting of changed files into separate, focused commits per collection component.

Claude Code changelogs skill

Adds .claude/skills/changelogs/SKILL.md — an AI agent skill for inspecting branch commits, categorising them using antsibull-changelog fragment categories, writing a changelog fragment YAML, and running the changelog generation command.


Both of these skills complement one another. I use the commit skill to create a conventional-commit-formatted message, which automatically detects the component (e.g. role, plugin, module) being modified. Then the changelogs skill inspects those commit messages, generates the changelog fragment (mapping the commits to the related antsibull-changelog category), and runs `uv run antsibull-changelog release -v`.
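
For anyone unfamiliar with the format, an antsibull-changelog fragment is a small YAML file under `changelogs/fragments/` keyed by category. The changelogs skill writes something along these lines (contents illustrative, reconstructed from the hub_bind_address change in this release):

```yaml
# changelogs/fragments/hub-server-host-variable.yml (illustrative)
minor_changes:
  - community.beszel.hub - add a ``hub_bind_address`` variable to configure
    the server host binding.
```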

See the changelogs skill in action:

⏺ Skill(changelogs)
  ⎿  Successfully loaded skill

⏺ Searched for 1 pattern, read 2 files (ctrl+o to expand)

⏺ User answered Claude's questions:
  ⎿  · Version 1.0.0 already has a release entry in changelogs/changelog.yaml. Should I remove it and the corresponding fragment file before proceeding? → Yes, remove
      it

⏺ Read 1 file (ctrl+o to expand)

⏺ Update(changelogs/changelog.yaml)
  ⎿  Removed 15 lines
      141      fragments:
      142        - 0.7.1.yml
      143      release_date: '2026-02-23'
      144 -  1.0.0:                                                                                                                                            
      145 -    changes:                                                                                                                                        
      146 -      breaking_changes:                                                                                                                             
      147 -        - community.beszel.agent - Replace auto docker-group detection with configurable                                                            
      148 -          agent_user_groups and agent_docker_host variables                                                                                         
      149 -      bugfixes:                                                                                                                                     
      150 -        - Remove append flag from user creation tasks in agent and hub roles                                                                        
      151 -      minor_changes:                                                                                                                                
      152 -        - community.beszel.hub - Add hub_uid and hub_user_groups variables                                                                          
      153 -      release_summary: 'Release 1.0.0 of the Ansible community collection for Beszel.                                                               
      154 -                                                                                                                                                    
      155 -        '                                                                                                                                           
      156 -    fragments:                                                                                                                                      
      157 -      - 1.0.0.yml                                                                                                                                   
      158 -    release_date: '2026-03-21'                                                                                                                      

⏺ Now let me get the branch name and analyze commits.

⏺ Bash(git rev-parse --abbrev-ref HEAD && echo "---" && git log main..HEAD --oneline && echo "---" && git log main..HEAD --format="%H %s%n%b" && echo "---" && git
      diff…)
  ⎿  1.0.0
     ---
     6fc3795 feat(community.beszel.hub): add variable to configure server host binding (#37)
     … +37 lines (ctrl+o to expand)

⏺ The fragment already exists with updated content. Let me verify it and then run the changelog generation.

⏺ Read 1 file (ctrl+o to expand)

⏺ Fragment looks correct. Running changelog generation:

⏺ Bash(uv run antsibull-changelog release -v)
  ⎿  (No output)

⏺ Searched for 3 patterns, read 1 file (ctrl+o to expand)

⏺ Both files are updated correctly:

  - CHANGELOG.md — has v1.0.0 section with Minor Changes, Breaking Changes, and Bugfixes
  - CHANGELOG.rst — has v1.0.0 section confirmed

  The changelog now includes hub_bind_address as a minor_changes entry under community.beszel.hub, alongside the previously existing entries. The stale
  hub-server-host-variable.yml fragment has been deleted.

The commit skill in action:

❯ Looks good, commit and push this changelog update.                                                                                                                  

⏺ Skill(commit)
  ⎿  Successfully loaded skill

⏺ Bash(git status --short)
  ⎿   M CHANGELOG.md
      M CHANGELOG.rst
      M changelogs/changelog.yaml
     … +2 lines (ctrl+o to expand)

⏺ All files are collection-level (changelogs). Let me check the diff to infer the commit type.

⏺ Bash(git diff HEAD -- CHANGELOG.md CHANGELOG.rst changelogs/changelog.yaml changelogs/fragments/1.0.0.yml changelogs/fragments/hub-server-host-variable.yml)
  ⎿  diff --git a/CHANGELOG.md b/CHANGELOG.md
     index 0a1a048..27cf360 100644
     --- a/CHANGELOG.md
     … +47 lines (ctrl+o to expand)

⏺ User answered Claude's questions:
  ⎿  · Proposed commit for collection-level changelog files:

     ```
     [skip ci]chore: merge hub-server-host-variable fragment into 1.0.0 and regenerate changelogs

     Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
     ```

     Approve, or provide an edited message? → Approve

⏺ Bash(git add CHANGELOG.md CHANGELOG.rst changelogs/changelog.yaml changelogs/fragments/1.0.0.yml changelogs/fragments/hub-server-host-variable.yml)
  ⎿  (No output)

⏺ Bash(git commit -m "$(cat <<'EOF'
      [skip ci]chore: merge hub-server-host-variable fragment into 1.0.0 and regenerate changelogs…)
  ⎿  trim trailing whitespace.................................................Passed
     fix end of files.........................................................Passed
     check for added large files..............................................Passed
     … +6 lines (ctrl+o to expand)

⏺ Bash(git log --oneline -1)
  ⎿  078f60a [skip ci]chore: merge hub-server-host-variable fragment into 1.0.0 and regenerate changelogs

⏺ Committed. Now pushing:

⏺ Bash(git push)
  ⎿  To github.com:ansible-collections/community.beszel.git
        6fc3795..078f60a  1.0.0 -> 1.0.0

⏺ Pushed. 078f60a is now on 1.0.0 at origin

@dbrennand seeing the commit skill, which adds Co-Authored-By to commits, I couldn’t help thinking: what about signed commits? Not to hijack this topic, but are signatures required for co-authoring?

Hey @russoz - yes, good point. The signed commit requirement was introduced in IMPORTANT: github.com/ansible now requires signed commits, and will eventually be introduced to the ansible-community and ansible-collections orgs (at some point). Let’s discuss further on that thread; I believe it was already brought up by @felixfontein.


So this is what I’ve been up to recently:

I’ve ended up with various skills and commands, although the commands could probably become skills:

Code Quality:

  • format - Run black code formatting
  • lint - Run linting checks for code quality
  • unit-tests - Run unit tests with tox (newest or oldest versions)
  • sanity-tests - Run ansible-test sanity checks

Workflow:

  • precommit - Pre-commit workflow: format → lint → unit tests
  • prepush - Pre-push workflow: changelog check → format → lint → unit tests (oldest + newest) → sanity tests
  • create-pr - Full PR creation workflow: verify branch → check changelog → run prepush → push to remote → gather PR info → create draft PR → apply labels → update changelog fragments

Documentation:

  • next-version - Determine the next version number for version_added tags

Commands

  • /check-actions - Check GitHub Actions status for current branch and analyse failures
  • /check-sonar - Fetch and analyse SonarCloud issues for the current PR

A general question: should we not just extend the existing awesome list (GitHub - ansible-community/awesome-ansible: Awesome Ansible List) with an AI subheading? Creating another list would cause fragmentation, I feel.

OK, so what should we name this repo?

@dbrennand Both will be useful

@tremble That looks great. I hadn’t seen any automation for SonarCloud before.

@dbrennand That’s fair. We can start there and have links back and forth

Hi, have you thought about organizing your skills in a way that makes them compatible with Lola, which acts as a package manager for AI skills, prompts and workflows?


Woah! TIL about Lola! Thanks for sharing @alinabuzachis


Apparently, and this seems to be strictly a Claude construct, we can bundle up skills, commands, rules, etc. into plugins. And then one can create their own plugin marketplace (which seems to be a glorified catalog of plugins) to distribute them. Akin to packages and repos, as it seems.

I reckon that would be perfect for publishing our artefacts - the “ansible-community plugin” for Claude or something along those lines.

This repo has some interesting pointers and best practices.


BTW, I have started using Claude a bit more frequently in c.g. and it has helped close issues that had been pending for over 6 years. We have 760+ issues open in there, and historically that number has only gone up, so I really think this is useful for us. Of course it won’t get things right 100% of the time, but then again, neither do we. It can try more often than we can, that’s for sure.


@russoz I’m really interested to hear how you’ve approached this - what’s worked and what hasn’t, what prompts have worked, etc.

When you get a chance, could you please start a separate forum thread on this?


This is exactly my experience creating Ansible content (including plugins) with LLMs - high-end models get it right close to 100% of the time. Recently I tried a 9-billion-parameter model to fix ansible-lint issues - it did it without any problems, and 9B is a very small model. A 3B model failed, but that might be due to my approach, not model limitations.


Hey @russoz, I actually started with an individual skill and moved to a marketplace to distribute mine once I started adding additional skills. It also helps with keeping them updated.

@gundalow already posted my ansible skills repo above, but here is the marketplace file: claude-ansible-skills/.claude-plugin/marketplace.json at main · leogallego/claude-ansible-skills · GitHub

Which currently includes these skills:

  1. ansible-cop-review (this could be rebranded as “ansible-good-practices”, as cop is from Red Hat’s community of practice)
  2. ansible-scaffold-collection
  3. ansible-scaffold-role
  4. ansible-scaffold-ee
  5. ansible-zen

This should make it easy to install with just a few commands inside Claude Code:

/plugin marketplace add https://github.com/leogallego/claude-ansible-skills
/plugin install ansible-cop-review
/plugin install ansible-scaffold-role
/plugin install ansible-scaffold-collection
/plugin install ansible-scaffold-ee
/plugin install ansible-zen

I already got a PR for the ansible-zen one to improve how the references are loaded and lower the context use. I will be merging that soon and applying it to the others this week if I can. They should get auto-updated once I merge, and hopefully Claude will show there is an update pending or apply them automatically.

The problem with this approach is that it’s Claude-centred. I really liked Lola (which @alinabuzachis shared) for accommodating the multi-assistant approach we were looking for. I will give Lola a try to make my skills work with Cursor and see what happens.

There is https://skills.sh/, which aims to make skills portable to different harnesses.

1 Like