* [email protected] <[email protected]> [2026-03-28 14:39]:
> On Sat, Mar 28, 2026 at 02:08:31PM +0300, Jean Louis wrote:
> 
> [...]
> 
> > Yeah, you're not wrong—LLMs will absolutely bullshit eloquently if you
> > let them. The literate docs don't magically fix that.
> 
> [...]
> 
> The (AFAICS) unsolved problem is that there is no way to be
> sure that the (eloquent) text corresponds to the code. If
> not, it would be highly counterproductive.

How does the uncertainty of machine-generated code compare to the
uncertainty of human-written code?

I suggest you try it. 

When you write your own code, you have no certainty either, until you
gain it by trying the code out, testing it, and seeing the
functionality.

Any code that is neither inspected nor used is not productive, whether
it works or not.

There is a way to be sure that code works, and that is by testing it
and seeing that it functions.

I suggest you install opencode and try it out; once you go through
the process, you will soon reach that state of certainty.

OpenCode | The open source AI coding agent:
https://opencode.ai/

Instead of editing in Build mode, you can try it out in Plan mode:
give it a task and see what you get. Certainty comes from testing and
seeing the functionality; it is an individual human state.

An LLM model may reach certainty that a human does not share, and a
human may reach certainty that the model does not.

True certainty is obtained by testing functions and seeing whether
they do what they are meant to do.
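To make the point concrete, here is a minimal sketch in Python (the
function and its tests are hypothetical examples, not from any
particular project): whether a human or an LLM wrote the function,
reading it gives only an impression; running the tests gives the
certainty.

```python
# Hypothetical example: certainty about this function comes from
# running the assertions below, not from reading the code or the
# eloquent text that may accompany it.

def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# Testing gives the certainty that reading alone cannot:
assert word_count("free software, free society") == 4
assert word_count("") == 0
print("tests passed")
```

Run it, see "tests passed", and you have the certainty for this
function; no amount of generated prose about the code substitutes for
that step.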

-- 
Jean Louis

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)
