Paolo Bonzini <[email protected]> writes:

> Using phrasing from https://openinfra.org/legal/ai-policy (with just
> "commit" replaced by "submission", because we do not submit changes
> as commits but rather emails), clarify that the maintainer who bestows
> their blessing on the AI-generated contribution is not responsible
> for its copyright or license status beyond what is required by the
> Developer's Certificate of Origin.
>
> [This is not my preferred phrasing.  I would prefer something lighter
> like "the "Signed-off-by" label in the contribution gives the author
> responsibility".  But for the sake of not reinventing the wheel I am
> keeping the exact words from the OpenInfra policy.]
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
>  docs/devel/code-provenance.rst | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/docs/devel/code-provenance.rst b/docs/devel/code-provenance.rst
> index d435ab145cf..a5838f63649 100644
> --- a/docs/devel/code-provenance.rst
> +++ b/docs/devel/code-provenance.rst
> @@ -334,6 +334,11 @@ training model and code, to the satisfaction of the project maintainers.
>  Maintainers are not allow to grant an exception on their own patch
>  submissions.
>  
> +Even after an exception is granted, the "Signed-off-by" label in the
> +contribution is a statement that the author takes responsibility for the
> +entire contents of the submission, including any parts that were generated
> +or assisted by AI tools or other tools.
> +

I quite like the LLVM wording which makes expectations clear to the
submitter:

  While the LLVM project has a liberal policy on AI tool use, contributors
  are considered responsible for their contributions. We encourage
  contributors to review all generated code before sending it for review
  to verify its correctness and to understand it so that they can answer
  questions during code review. Reviewing and maintaining generated code
  that the original contributor does not understand is not a good use of
  limited project resources.

It could perhaps be even stronger ("must" rather than "encourage"). The
key point to emphasise is that we don't want submissions that the user
of the generative AI doesn't understand.

Although our GitHub lockdown policy auto-closes PRs so we don't see
those, we are already seeing a growth in submissions where the authors
seem to have YOLO'd the code generator without really understanding the
changes.

>  Examples of tools impacted by this policy includes GitHub's CoPilot, OpenAI's
>  ChatGPT, Anthropic's Claude, and Meta's Code Llama, and code/content
>  generation agents which are built on top of such tools.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
