* David Masterson <[email protected]> [2026-03-25 08:12]:
> Jean Louis <[email protected]> writes:
> 
> > Let's stop pretending that copyright assignment and human‑only
> > authorship are essential to freedom. They were tactics, not
> > principles. If we can now produce more free software with less legal
> > overhead, using tools we control, that's a win—not a threat.
> 
> The issue, though, that I think Ihor has raised elsewhere is that the
> code (patch) generated by the LLM, copyrighted or not, may be so "dense"
> as to be beyond easy human understanding.  Therefore, if it is accepted,
> that is a potential loss for free software as it can lead to eventual
> humans lazily accepting the LLM code without understanding it and
> leading to future problems.

I understand your idea, but knowing what fully free LLMs can do on my
computer, even when running solely on the CPU, even the smallest model
can help you understand the code, whatever it is.

If something is beyond human understanding, literally, then why would
it be accepted in any kind of society? That makes no sense.

The Org mode maintainers can always issue their own guidance.

Why don't you try using opencode and tell it to make a smaller patch
that is easy for humans to understand? That is all it takes: naturally
expressing what you want to do.

I keep a prompt in a directory, and I just point the model to it, and
the model does it. I do not need to repeat myself; I can even just say
"write the final article" and the model does what it already knows I
want.
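A minimal sketch of that prompt-in-a-directory workflow (the file
names and the model file are illustrative, not my actual setup):

```shell
# Store the instruction once, in a prompt directory:
mkdir -p ./prompts
cat > ./prompts/final-article.txt <<'EOF'
Write the final article from my notes, in my usual style.
EOF

# llama.cpp's llama-cli can read a prompt straight from a file with -f:
#   llama-cli -m qwen-0.5b.gguf -f ./prompts/final-article.txt
# so the stored prompt is reused verbatim every time:
cat ./prompts/final-article.txt
```

The point is that the instruction lives on disk, not in your head, so
"do it again" costs nothing.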

> A solution to this problem may be to only accept LLM generated code
> that is produced in a Literate Programming fashion.

An LLM is just literate programming, turbo-powered!

"Qwen3.5-0.8B-UD-Q8_K_XL.gguf" runs at 245 tokens per second, and
explains Emacs Lisp functions to me in basically a second or two.

And what about 4B, or Coder Next? I have just mentioned the least
capable model.
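For the curious, an invocation along these lines produces that kind of
explanation (the command is illustrative; llama.cpp's llama-cli takes
the prompt with -p, and the function shown is a made-up example):

```shell
# Illustrative: ask a local model to explain an Emacs Lisp function.
#
#   llama-cli -m Qwen3.5-0.8B-UD-Q8_K_XL.gguf \
#     -p "Explain this Emacs Lisp function: $(cat some-function.el)"
#
# Here we only build such a prompt, to show its shape:
printf 'Explain this Emacs Lisp function: %s\n' \
  '(defun greet () (message "hi"))'
```

From Emacs, the same prompt can be assembled from the function at
point and sent to the model.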

Is there really a need for literate programming today?

The core philosophy is more relevant than ever, but its traditional
tools and practices are being radically transformed, bypassed, or
absorbed into new workflows.

The arrival of LLMs has created a fascinating new dynamic. Instead of
reviving classic tools like noweb, LLMs are solving the same problems
in a fundamentally new way.

We are likely entering a new era where the tools will explicitly
merge literate programming with LLM workflows.

Tools will emerge where you write a specification in a hybrid
markdown/code file, and an LLM acts as the tangler, generating
production-ready code in real time as you write the documentation.
This is already being done all over the world; software is being
produced hundreds of times faster than ever before.

How about Executable Documentation: the ultimate realization of
Knuth's vision, where the documentation is the executable, not because
of a macro pre-processor, but because an LLM can interpret the
documentation and run the code in a simulated or containerized
environment.
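A toy sketch of that idea, with plain sed standing in for the LLM's
interpretation step (file name and spec content are invented for the
example):

```shell
# "Executable documentation": the code lives inside the prose, and a
# tool extracts and runs it.
cat > spec.md <<'EOF'
# Greeting spec
The program must print a greeting.

    echo "hello from the spec"
EOF

# Strip the 4-space indentation that marks the code block, then run it:
sed -n 's/^    //p' spec.md | sh
```

An LLM replaces the dumb extraction with actual understanding of the
prose, but the shape of the loop is the same: document first, execute
the document.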

----------------------
begin of sample output
----------------------

This Lisp function, wrs-rsync-area-contents, is a tool designed to 
automatically copy files from a local computer to a remote server. It's likely 
part of a larger web development or server management system (likely called 
"WRS").

Here is a plain-English explanation of how it works:
1. The Goal

Its job is to sync the files inside a specific project folder (the
"Area") to a remote server.
---SNIP---

----------------------
 end of sample output
----------------------

I can explain any word, any function, within Emacs, and that is what
many Emacs users with LLMs already do.

Do we need literate programming any more? Not in the traditional
sense.

Do we need manuals? They can now be produced in record time. Even
personalized manuals can be created using tools like Opencode, which
can take into account your persona, your language, and your level of
understanding.

> The LLM should be able to do a deep dive on explaining the detailed
> requirements that led to the code, the breakdown of the requirements
> that went into the code blocks, and how each code block is supposed
> to function.

Yes? Nothing new to me. That is exactly what an LLM can do. Yesterday
I took a library totally unknown to me and said which machine parts I
wanted designed parametrically. It ran in a loop (opencode), I let
FreeCAD verify the objects, and each time I told the "agent" what had
to be moved and in which direction. I finally got the machine's rotor
finished, and I can hand it to the CNC cutter to get the parts made.

During that time, it used the Python library build123d, delved deeply
into it, and can explain how the requirements led to the code, and
everything else you mention.

That is daily routine for many of us using it.

> Of course, this would require putting the proper tooling into GNU
> software to deal with literate programs or patches, I don't think
> that would be very difficult, but others would have to speak to
> that.
> 
> Would this be possible given today's technology?

Yes, it is possible—I am already doing it daily with local models
using llama.cpp running entirely on CPU, with no proprietary
dependencies. The tooling required is minimal: a text editor,
llama.cpp, and a directory of prompt files or list of database
entries. This runs on standard GNU/Linux systems today.

-- 
Jean Louis
