Ihor Radchenko <[email protected]> writes:
>> So if we accept code that the submitting human does not understand, we
>> can reach a state that’s unmaintainable for humans.
>
> I think that this point is the most important.
> You somehow assume that we will be accepting code that humans cannot
> understand. We will not.

I wouldn’t phrase it as "cannot" but as "do not".

As long as we expect the contributor to understand the code they submit,
I don’t worry too much.

I mostly worry that contributors will stop reading the code and expect
others to review code the contributors never read themselves (or wave it
through "because AI").

Because by now that’s what everyone I know personally who uses AI has
ended up doing. Even the one who I thought didn’t do that.

Or if people start saying “let AI do a pre-review” -- that just means
forcing contributors to read AI output. If I as a reviewer don’t want to
read unchecked AI output, I shouldn’t force contributors to read it
either.

Best wishes,
Arne
-- 
To be apolitical
is to be political
without noticing it.
https://www.draketo.de