> Sent: Monday, March 30, 2026 at 11:58 AM
> From: "Jean Louis" <[email protected]>
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [emacs-tangents] Literate LLM programming? [Re: Is org-mode 
> accepting AI-assisted babel ob- code updates?]
>
> On 2026-03-29 11:26, [email protected] wrote:
> > I think this example shows pretty well where the lie is in the
> > current wave of AI. It's not the "hallucinations", it is the
> > fact that they are wired to "talk" to us as if they knew what
> > they're doing.
> 
> Of course.
> 
> The assertion that AI systems are inherently deceptive due to their 
> conversational design—particularly the perception that they "know" what 
> they are saying—is a common but misinformed critique. This perspective 
> conflates the output behavior of large language models (LLMs) with 
> intent or truthfulness, which are attributes of human cognition, not 
> machine-generated text.
> 
> - LLMs are statistical models trained on vast corpora of text data.
> - They generate responses based on patterns in training data, not on 
> understanding, intent, or factual verification.
> - The ability to "talk" coherently is a feature of their 
> architecture, not evidence of knowledge or deception; the toy sketch 
> after this list makes that concrete.
> 
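> To make that concrete, here is a toy Python sketch (nothing like a 
> real LLM, only the bare statistical idea): a bigram table generates 
> fluent-looking text purely from counted word transitions, with no 
> understanding anywhere in the loop.
>
>     import random
>     from collections import defaultdict
>
>     corpus = ("the model predicts the next word . "
>               "the next word follows the pattern . "
>               "the pattern comes from the data .").split()
>
>     # Count which word tends to follow which in the training text.
>     follows = defaultdict(list)
>     for a, b in zip(corpus, corpus[1:]):
>         follows[a].append(b)
>
>     # Generate by sampling statistically likely continuations.
>     word, out = "the", ["the"]
>     for _ in range(8):
>         word = random.choice(follows[word])
>         out.append(word)
>     print(" ".join(out))  # fluent-looking, yet nothing was "known"
>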
> - The accuracy of LLM outputs is fundamentally determined by the 
> quality, relevance, and bias of the data used in training.
> - Organizations that curate and train models bear responsibility for 
> data selection and curation.
> - Misinformation or biased outputs stem from training data that 
> reflects historical, societal, or editorial biases, not from the 
> model's inherent nature; the sketch after this list demonstrates it.
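>
> The same toy setup demonstrates the bias point (again a sketch with 
> made-up one-line "corpora", not real data): the model's continuation 
> statistics are nothing but the frequencies found in its training 
> text, so a slanted corpus yields a slanted model.
>
>     from collections import Counter
>
>     # Two toy "training corpora" with opposite editorial slants.
>     corpus_a = "the tool is reliable . the tool is reliable . the tool is flawed ."
>     corpus_b = "the tool is flawed . the tool is flawed . the tool is reliable ."
>
>     def continuation_stats(corpus, context="is"):
>         words = corpus.split()
>         return Counter(b for a, b in zip(words, words[1:]) if a == context)
>
>     print(continuation_stats(corpus_a))  # Counter({'reliable': 2, 'flawed': 1})
>     print(continuation_stats(corpus_b))  # Counter({'flawed': 2, 'reliable': 1})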

They're inherently biased by their training data, at the most 
fundamental level.  That is also why so much training data is 
required: if something is properly understood, one can get to the 
solution directly, and the statistical analysis step simply 
disappears.

> - Humans are also prone to misinformation, cognitive biases, and 
> propaganda—often internalizing false narratives through repeated 
> exposure.
> - The prevalence of propaganda in media, politics, and education 
> demonstrates that humans are not inherently more truthful or discerning 
> than AI systems.
> - The difference lies in transparency: humans often believe they are 
> reasoning objectively, while LLMs generate responses without 
> self-awareness.
> 
> The reliability of LLM outputs depends entirely on how they are 
> deployed:
> 
> - Unfiltered chatbots may generate plausible but false content.
> 
> - Engineering-grade applications (e.g., mine safety protocols, 
> geological modeling) use LLMs as assistants within verified 
> workflows, with outputs cross-checked against authoritative sources; 
> see the sketch after this list.
> 
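> A minimal sketch of that deployment pattern follows. The names 
> query_llm() and lookup_authoritative() are hypothetical placeholders 
> for whatever model API and vetted reference a real system would use; 
> the point is only that model text is gated by a trusted source 
> before it is acted upon.
>
>     # Hypothetical placeholders: a real deployment would call an actual
>     # model API and a vetted source (regulation text, sensor spec, ...).
>     def query_llm(prompt: str) -> str:
>         return "methane alarm threshold: 1.0% by volume"
>
>     def lookup_authoritative(key: str) -> str:
>         return "methane alarm threshold: 1.0% by volume"
>
>     def checked_answer(prompt: str, key: str) -> str:
>         draft = query_llm(prompt)
>         reference = lookup_authoritative(key)
>         if draft.strip() != reference.strip():
>             # Never pass unverified model text downstream.
>             raise ValueError(f"LLM draft disagrees with source: {draft!r}")
>         return draft
>
>     print(checked_answer("What is the methane alarm threshold?",
>                          "methane-threshold"))
>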
> The idea that LLMs are "lying" because they speak confidently is a 
> misattribution of human traits to machines. The real issue lies in how 
> these tools are used, not in their design. When properly programmed, 
> integrated, and monitored, LLMs are powerful aids—not sources of 
> deception. The responsibility for accuracy remains with the human 
> operators and data curators.
> 
> -- 
> Jean Louis
> 

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)