On Sun, Mar 29, 2026 at 12:15:33PM +0300, Jean Louis wrote:
> * [email protected] <[email protected]> [2026-03-28 14:39]:
> > On Sat, Mar 28, 2026 at 02:08:31PM +0300, Jean Louis wrote:
> > 
> > [...]
> > 
> > > Yeah, you're not wrong—LLMs will absolutely bullshit eloquently if you
> > > let them. The literate docs don't magically fix that.
> > 
> > [...]
> > 
> > The (AFAICS) unsolved problem is that there is no way to be
> > sure that the (eloquent) text corresponds to the code. If
> > not, it would be highly counterproductive.
> 
> How does the uncertainty of machine-generated code compare to the
> uncertainty of human-written code?
> 
> I suggest you try it. 

[...]

I have been debugging other people's code for a while now, and
yes, comments and docs aren't always right (they tend to age,
and since the test suite doesn't exercise them, nothing flags
the drift).
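
For what it's worth, one way to let the test suite reach the
docs is executable examples. A minimal sketch in Python
(parse_port is a made-up function, purely for illustration):
doctest runs the examples embedded in the docstring as part of
the tests, so the prose can't silently drift from the code.

    def parse_port(value):
        """Parse a TCP port number from a string.

        >>> parse_port("8080")
        8080
        >>> parse_port("70000")
        Traceback (most recent call last):
            ...
        ValueError: port out of range: 70000
        """
        port = int(value)
        if not 0 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    if __name__ == "__main__":
        import doctest
        # Fails loudly as soon as docstring and behavior diverge.
        doctest.testmod()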

Thing is, after a while in the code I tend to build a mental
model (call it a "wet LLM" if you fancy) of the typically
small group of people who worked on the code and of the
processes which shaped it. I doubt I'll be able to do the
same with LLM-generated code.

I predict: LLM-generated code will be more "throwaway".

Cheers
-- 
t
