On 2025-01-12 at 18:03 +0000, Andrew M.A. Cater wrote:
> Watching other people find and fix bugs, even in code they
> have written or know well, I can't trust systems built on modern
> Markov chains to do better, no matter how much input you give them, and
> that's without crediting LLMs as able to create good novel code..

This is something I have thought about before, and which I find
missing from most (all?) discussions of the "let's program with an
LLM" topic.

When a human¹ programs something, I expect there is a logical process
through which they arrive at the decision to write a given set of
lines of code. This doesn't mean those lines will be the right ones,
or bug-free. Just that they make sense.

For example, a program that does chdir("/"); at the beginning may
suggest it is meant to run as a daemon, as this keeps it from
blocking filesystems from being unmounted.
If it has a number of calls to getuid(), setuid(), setresuid()... it
might switch to a different user.
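To make the kind of reasoning I mean concrete, here is a minimal C
sketch of that conventional pattern (the choice of the "daemon" user
and the terse error handling are mine, purely for illustration):

  #include <pwd.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      /* Don't pin the startup directory, so it can be unmounted. */
      if (chdir("/") != 0) {
          perror("chdir");
          return EXIT_FAILURE;
      }

      /* Drop root privileges to an unprivileged user
       * ("daemon" is just an example here). */
      struct passwd *pw = getpwnam("daemon");
      if (pw == NULL) {
          fprintf(stderr, "unknown user\n");
          return EXIT_FAILURE;
      }
      if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
          perror("dropping privileges");
          return EXIT_FAILURE;
      }

      /* ... actual daemon work would go here ... */
      return EXIT_SUCCESS;
  }

Even the ordering carries intent: setgid() has to come before
setuid(), because once the process has given up root it can no longer
change its group. Every line there has a "why" a reviewer can check
the code against.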

However, if the code was generated by an LLM, all bets are off: the
lines could make no sense at all for this specific program.


It wouldn't be that strange if an LLM asked to generate a control
file for a Perl module suggested a line such as
  Depends: libc6 (>= 2.34)
just because there are lots of packages with that dependency.²
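For contrast, the binary stanza of a hand-maintained control file for
an architecture-independent Perl module would normally lean on the
substitution variables instead (the package name below is made up):

  Package: libfoo-bar-perl
  Architecture: all
  Depends: ${misc:Depends}, ${perl:Depends}

Those get filled in at build time from what the package actually
needs, so an explicit, hardcoded libc6 dependency there should raise
an eyebrow.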

A person could make a similar mistake of including unnecessary
dependencies when copying their work from an unrelated package, if it
is not properly cleaned up afterwards. But how do you fix those
things when the mentor is an LLM?



¹ somewhat competent as a programmer
² hopefully, an LLM wouldn't be trained on the *output* of the
templates, though.

