On 9/22/25 15:24, Daniel P. Berrangé wrote:
FWIW, I considered that the "exception process" would end up
being something like...
* someone wants to use a particular tool for something they
believe is compelling
* they complain on qemu-devel that our policy blocks their
valid use
* we debate it
I guess we're here, except for hiding the complaint behind a patch. :)
* if agreed, we add a note to this code-provenance.rst doc to
allow it
I would imagine that exceptions might fall into two buckets
* Descriptions of techniques/scenarios for using tools
that limit the licensing risk
* Details of specific tools (or more likely models) that
are judged to have limited licensing risk
it is hard to predict the future though, so this might be
too simplistic. Time will tell when someone starts the
debate...
Yeah, I'm afraid it is; allowing specific tools might not be feasible,
as the scope of "allow Claude Code" or "allow cut and paste from ChatGPT
chats" is obviously way too large. Allowing some usage scenarios seems
more feasible (as done in patch 4).
What is missing: do we want a formal way to identify commits for which an
exception to the AI policy was granted? The common way to do so seems to
be "Generated-by" or "Assisted-by", but I don't want to turn the commit
message into an ad space. I would lean more towards something like
AI-exception-granted-by: Mary Maintainer <[email protected]>
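To sketch what I mean (the trailer name and the people are hypothetical,
of course), the footer of such a commit could look like:

    AI-exception-granted-by: Mary Maintainer <[email protected]>
    Signed-off-by: Carla Contributor <[email protected]>

i.e. the usual DCO signoff from the contributor, plus an extra trailer
added by whoever judged that the exception applies.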
IMHO the code-provenance.rst doc is what grants the exception, not
any individual person, nor any individual commit.
Whether we want to reference that a given commit is relying on an
exception or not is hard to say at this point as we don't know what
any exception would be like.
Ideally the applicability of an exception could be self-evident
from the commit. Reality might be more fuzzy. So if not self-evident,
then it likely warrants a sentence or two of English text in the
commit to justify its applicability.
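For example (a purely hypothetical commit, assuming some kind of
refactoring exception had been listed in code-provenance.rst), a
commit message saying something like:

    Rename foo_bar() to foo_baz() across the tree. The rename was
    performed mechanically by an AI tool; no new logic was generated,
    so it falls under the refactoring exception in code-provenance.rst.

would make its applicability easy to judge at review time.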
IOW, a tag like AI-exception-granted-by doesn't feel like it is
particularly useful.
I meant it as more of an audit trail, especially for the case where a
new submaintainer would prefer to ask someone else, or for the case of a
maintainer contributing AI-generated code. If we can keep it simple and
avoid this, that's fine (it's not even in the policy, only in the commit
message).
What I do *not* want is Generated-by or Assisted-by. The exact model or
tool should not matter in deciding whether a contribution fits the
exception. Companies tell their employees "you can use this model
because we have an indemnification contract in place", but I don't think
we should care about what contracts they have---we have no way to check
if it's true or if the indemnification extends to QEMU, for example.
**Current QEMU project policy is to DECLINE any contributions which are
believed to include or derive from AI generated content. This includes
- ChatGPT, Claude, Copilot, Llama and similar tools.**
+ ChatGPT, Claude, Copilot, Llama and similar tools. Exceptions may be
+ requested on a case-by-case basis.**
I'm not sure what you mean by 'case-by-case basis'? I certainly don't
think we should entertain debating use of AI in individual patch series,
as that'll be a never ending burden on reviewer/maintainer resources.
Exceptions should be things that can be applied somewhat generically to
tools, or models or usage scenarios IMHO.
I meant that at some point a human will have to agree that it fits the
exception, but yeah it is not the right place to say that.
I would suggest only this last paragraph be changed
This policy may evolve as AI tools mature and the legal situation is
clarified.
Exceptions
----------
The QEMU project welcomes discussion on any exceptions to this policy,
or more general revisions. This can be done by contacting the qemu-devel
mailing list with details of a proposed tool / model / usage scenario /
etc that is beneficial to QEMU, while still mitigating the legal risks
to the project.
After discussion, any exceptions that can be relied upon in contributions
will be listed below. The listing of an exception does not remove the
need for contributors to comply with all other pre-existing contribution
requirements, including DCO signoff.
This sounds good (I'd like to keep the requirement that maintainers ask
for a second opinion when contributing AI-generated code, but that can
be woven into your proposal). Another benefit is that this phrasing is
independent of the existence of any exceptions.
I'll split the first three patches into their own non-RFC series, and we
can keep discussing the "refactoring scenario" in this thread.
Paolo