On Tue, Nov 11, 2025 at 09:35:18AM +0000, Lorenzo Stoakes wrote:
>
> Now 'any idiot' can fire off hundreds of patches that look at a glance as
> if they might have some validity.
>
> The asymmetry of this is VERY concerning.
>
> I also hate that we have to think about it, but the second the press put
> out 'the kernel accepts AI patches now!' - and trust me THEY WILL - we are
> likely to see an influx like this that maintainers will have to deal with.
Yeah, that's an argument for not requiring any kind of AI tagging.
One of my concerns is that there's no guarantee that people flooding
the kernel with AI slop won't disclose that they used an LLM.
> 1. Maintainers MUST have the ability to JUST SAY NO, go away _en-masse_ to
> regain symmetry on this.
Maintainers do have this already. There are certain people who are
known to be sending low quality patches, and people just quietly
ignore those patches.
The risk of AI slop is that this will just happen a *lot* more often,
which means that patches from known high quality contributors will get
far more attention than patches from newer contributors --- because we
won't know whether it's a new contributor who is coming up to speed,
or someone who is sending AI slop. So the more AI slop we get, the
more this dynamic will accelerate, to the point where the accusation
that we're an old "boys/girls" club will become true, and people will
accuse us of not being welcoming to new contributors.
There *will* be a solution to the asymmetry; so I wouldn't consider it
"unworkable". It's just that we (and especially newcomers) might not
like the solution that naturally comes out of it. As you put it,
"throw out the baby with the bath water"; the system will survive, but
it might suck to be the baby.
> 2. Those who submit patches MUST UNDERSTAND EVERY PART OF IT.
>
> 'that which can be proposed without understanding can be dismissed without
> understanding'.
Yeah, it might be that all we can do is to say that people who use
LLMs without understanding every part of their patches may end up
blackening their reputation, with the result that *all* their patches
might get ignored.
And we can warn that if a company has many of its employees sending
lower quality contributions, maintainers might decide to address the
denial of service attack by ignoring *all* patches from a particular
company / domain. We've done this before, with the University of
Minnesota, due to gross abuse leading to a lack of trust in the
institution.
Hopefully things won't come to that, but maybe explicitly warning
people that *is* a possibility might be useful as a deterrent factor.
And I think it's important to say that low quality contributions
from AI are no different from any other kind of low quality
contribution. And just as judges in courts of law have sanctioned
lawyers for submitting legal briefs that contained completely
hallucinated court cases, there will be costs to sending cr*p no
matter what the source.
> I think as long as we UNDERLINE these points I think we're good.
>
> TL;DR: we won't take slop.
Agreed, completely.
- Ted