Should the Ansible project have an AI policy?
Some Free / Open Source projects have AI policies; for example, Servo has:
AI contributions
Contributions must not include content generated by large language models or other probabilistic tools, including but not limited to Copilot or ChatGPT. This policy covers code, documentation, pull requests, issues, comments, and any other contributions to the Servo project.
For now, we’re taking a cautious approach to these tools due to their effects — both unknown and observed — on project health and maintenance burden. This field is evolving quickly, so we are open to revising this policy at a later date, given proposals for particular tools that mitigate these effects. Our rationale is as follows:
Maintainer burden: Reviewers depend on contributors to write and test their code before submitting it. We have found that these tools make it easy to generate large amounts of plausible-looking code that the contributor does not understand, is often untested, and does not function properly. This is a drain on the (already limited) time and energy of our reviewers.
Correctness and security: Even when code generated by AI tools does seem to function, there is no guarantee that it is correct, and no indication of what security implications it may have. A web browser engine is built to run in hostile execution environments, so all code must take into account potential security issues. Contributors play a large role in considering these issues when creating contributions, something that we cannot trust an AI tool to do.
Copyright issues: Publicly available models are trained on copyrighted content, both accidentally and intentionally, and their output often includes that content verbatim. Since the legality of this is uncertain, these contributions may violate the licenses of copyrighted works.
Ethical issues: AI tools require an unreasonable amount of energy and water to build and operate, their models are built with heavily exploited workers in unacceptable working conditions, and they are being used to undermine labor and justify layoffs. These are harms that we do not want to perpetuate, even if only indirectly.
Note that aside from generating code or other contributions, AI tools can sometimes answer questions you may have about Servo, but we’ve found that these answers are often incorrect or very misleading.
In general, do not assume that AI tools are a source of truth regarding how Servo works. Consider asking your questions on Zulip instead.
This has recently been discussed in a GitHub issue.
Personally, I’d support the Ansible Project adopting something like this. Has this been discussed by the steering committee, and if not, could or should it be?