On 2/19/26 2:23 PM, Rodrigo Vivi wrote:
> On Wed, Feb 11, 2026 at 03:24:49PM -0500, Chris Mason wrote:
>>
>>
>> On 2/11/26 3:05 PM, Dave Airlie wrote:
>>> On Thu, 12 Feb 2026 at 06:02, Chris Mason <[email protected]> wrote:
>>>>
>>
>> [ ... ]
>>
>>>>> This is also just an experiment to see what might stick, it might
>>>>> disappear at any time, and it probably needs a lot of tuning.
>>>>
>>>> The output is pretty different from netdev/bpf:
>>>>
>>>> https://lore.kernel.org/bpf/?q=AI+reviewed+your+patch  
>>>>
>>>> Which might be what you want so it's fine of course.  But it looks like
>>>> it didn't actually go through the report generation from the review
>>>> prompts, so I'm worried it didn't use the rest of the prompts either.
>>>>
>>>> My stuff should be creating a review-inline.txt which is the lkml
>>>> formatted review.
>>>>
>>>> I'm happy to try things out here if it'll help.
>>>
>>> My plan over the next few days is to refine the code to make sure it's
>>> doing this, my prompt asks it to load the patch and the kernel
>>> prompts, then do a review across the series and individual patches,
>>>
>>> I'm guessing some of the results aren't making it back out the other side.
>>
>> I had to change the prompts a bit, I think my original instructions were:
>>
>> "read prompt xyz and patch abc, review the patch"
>>
>> But sometimes Claude would read the prompt and the patch and then follow
>> its own review protocol instead of mine.  The current /kreview slash
>> command is a lot more reliable:
>>
>> Read the prompt <path to prompts dir>/kernel/review-core.md
>>
>> If a git range is provided, it's meant for the false-positive-guide.md
>> section
>>
>> Using the prompt, do a deep dive regression analysis of the top commit,
>> or the provided patch/commit
> 
> Chris, first of all congrats on this work. I definitely loved the results
> I've seen so far.
> 
> I hope my question doesn't bring back the old LLM discussions. But based
> on the old discussions and people afraid of AI slop in the Linux kernel
> and the potential increase of noise in the review process, I got to
> wondering if it would be possible to add a prompt to your tool to flag
> whether the patch/series is potential AI slop.

Alexei had asked for something similar, so I put a few lines into
the prompts for it, but I haven't spent a lot of time defining the
signals that might be used to detect AI-generated patches.

Right now it mostly seems to detect AI-generated commit messages, and
while I haven't been paying really close attention, the ones it does
flag don't seem to be better or worse as commits than the rest.

Example, scroll down to the end of this email:

https://lore.kernel.org/all/2ddebc81fe2a7d80441d6cf3d27bf6973a4d0a233d6fdbb332d09700775d7...@mail.kernel.org/

There's review metadata at the bottom, you can search for
"AI-authorship-score: medium"

I didn't find any "high" in a really quick search.
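For anyone who wants to scan an archive for these locally, a quick
sketch (the trailer name is taken from the example above; the email
body here is just a stand-in):

```python
import re

# Match the review-metadata trailer, e.g. "AI-authorship-score: medium"
SCORE_RE = re.compile(r"^AI-authorship-score:\s*(low|medium|high)\s*$", re.M)

def authorship_scores(text):
    """Return all AI-authorship-score values found in an email body."""
    return SCORE_RE.findall(text)

body = """Some review text...
AI-authorship-score: medium
"""
print(authorship_scores(body))  # ['medium']
```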

If you have a couple of clear examples, I'm happy to try and build out
the rules to better catch them.  I'd like to focus on bugs instead of
the slop part, but one does tend to lead to the other.

-chris
