On 2026-03-29 13:11, Björn Kettunen wrote:
> Who Does That Server Really Serve? - GNU Project - Free Software Foundation:
> https://www.gnu.org/philosophy/who-does-that-server-really-serve.html
>
> The GNU philosophy piece "Who Does That Server Really Serve?" warns
> against exactly the kind of dependency you're describing—but it also
> assumes a world where users are forced to interact with software as a
> service. That assumption is increasingly outdated. Today, I can run
> Qwen, Llama, DeepSeek, or any number of open‑weight models entirely
> locally on my own hardware. Hugging Face, Allen AI, IBM, Apertus, and
> others are making this the norm. When I generate code, it's on my
> machine, with models that are publicly available, often under
> permissive or free software licenses. The "proprietary service"
> framing doesn't apply when the user controls the tool end‑to‑end.

The assumption isn't so outdated when users predominantly interact with
SaaS LLMs such as Claude.
Sidenote: we should not call them AI but LLMs. The former term obfuscates
what these things actually are.

If you mean the Allen AI I mentioned, that is the name of the company.

An LLM (Large Language Model) is fundamentally a statistical model — a massive set of learned parameters (weights) that predict the next token in a sequence based on patterns in its training data.
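To make the "statistical next-token predictor" point concrete, here is a toy sketch in Python. It is not an LLM (a real model learns billions of weights over subword tokens via gradient descent); it is a bigram count table over a made-up corpus, which illustrates the same core operation: predict the most likely next token given what came before.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data; real models train on
# trillions of subword tokens, not a dozen words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each preceding token (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely token to follow `token`."""
    following = counts[token]
    total = sum(following.values())
    # Turn raw counts into probabilities, then take the argmax.
    probs = {t: c / total for t, c in following.items()}
    return max(probs, key=probs.get)

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

The model "knows" nothing about cats; it only reflects frequencies in its training data, which is the point of the paragraph above.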

The GNU project (and people like Richard Stallman) are correct to push back against casually calling a raw LLM "artificial intelligence." It can feel like marketing hype that overstates what it is. A bare model is more like a very advanced lookup/completion tool than true intelligence.

A single LLM call (prompt → output) is limited and often brittle.

But when you wrap it in agentic workflows, tool use, memory, planning loops, multi-step reasoning, self-correction, external tools (search, code execution, calculators, APIs), etc., the overall system can exhibit behaviors that reasonably match many classical and modern definitions of "artificial intelligence."
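The agentic-wrapper idea can be sketched in a few lines. Everything here is illustrative: `fake_llm` is a stub standing in for a real model call (no vendor's API is being shown), and the `Action:`/`Observation:` text protocol is just one common pattern for tool use inside a planning loop.

```python
import re

def fake_llm(prompt):
    """Stub model: requests a tool first, then answers once it has seen
    the tool's result. A real system would call an actual LLM here."""
    if "Observation:" not in prompt:
        return "Action: calculator(17 * 23)"
    obs = prompt.rsplit("Observation:", 1)[1].strip()
    return f"Final Answer: {obs}"

def calculator(expr):
    # External tool: exact arithmetic the bare model can only approximate.
    return str(eval(expr, {"__builtins__": {}}))  # toy sandbox, not production-safe

def run_agent(question, max_steps=5):
    prompt = f"Question: {question}"
    for _ in range(max_steps):  # the planning loop
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer: ")
        m = re.match(r"Action: calculator\((.+)\)", reply)
        if m:  # execute the requested tool and feed the result back
            prompt += f"\n{reply}\nObservation: {calculator(m.group(1))}"
    return "gave up"

print(run_agent("What is 17 * 23?"))  # 391
```

The intelligence-in-practice lives in the loop: the model proposes actions, the surrounding system executes them and feeds results back, and the combination does what neither part does alone.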

So there you are: try it out and you will understand. This is why companies and researchers increasingly talk about LLM-based AI systems or AI agents rather than just "the LLM." The model is the core engine, but the surrounding architecture is what makes it intelligent in practice.

--
Jean Louis

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)