That’s a good idea in general, although they’re kinda overstating existing adoption on the front page: Fix "used by" overestimate on the web pages by webknjaz · Pull Request #30 · openai/agents.md · GitHub
I pointed Claude (in research mode) at my gist — This Gist collects opinions and pointers on fighting LLM spam on GitHub · GitHub — which links to all the discussions and policies I’ve seen, and it started scanning 357 sources.
This is what it composed: https://claude.ai/public/artifacts/de2ffa40-98dc-4458-b1b5-f3e645bf7840.
It’s rather comprehensive (and a bit long), but it includes a policy template and coping strategies. I shared this on the internal Slack, and @gundalow asked me to re-post it here.
Fedora’s AI policy has now been published:
Seems reasonable to me. There would be benefits to aligning with it for people who contribute to both projects.
People are also trying to align on this across projects: wg-ai-alignment/moderation at main · chaoss/wg-ai-alignment · GitHub.
I’ve also started a discussion in pip-tools, since I’m looking to come up with something that can be copied into the many projects I’m involved in: [policy] Good-faith agentic contributions and LLM use, and avoiding death by a thousand AI slops · jazzband/pip-tools · Discussion #2278 · GitHub.
UPD: I’m happy to announce that we accepted the initial policy in pip-tools three days ago; it’s now part of the contributing guide — Contributing - pip-tools documentation v7.5.4.dev32. The corresponding PR (in addition to the discussion I posted earlier) holds some interesting context on the decisions agreed upon, and the improvements postponed, over the course of the debate: https://github.com/jazzband/pip-tools/pull/2318.
P.S. This forum post has also been mentioned in an article that attempts to visualize how different projects handle slop: The Generative AI Policy Landscape in Open Source – console.log().