On Mon, Mar 30, 2026 at 02:41:09AM +0300, Jean Louis wrote:
> On 2026-03-29 11:26, [email protected] wrote:
> > I think this example shows pretty well where the lie is in the
> > current wave of AI. It's not the "hallucinations", it is the
> > fact that they are wired to "talk" to us as if they knew what
> > they're doing.
> 
> The assertion that AI systems are inherently deceptive due to their
> conversational design—particularly the perception that they "know" what they
> are saying—is a common but misinformed critique. This perspective conflates
> the output behavior of large language models (LLMs) with intent or
> truthfulness, which are attributes of human cognition, not machine-generated
> text.

You don't need to explain to me what LLMs are, thankyouverymuch. And
yes, the way they are "wrapped" to sound authoritative /is/ the
"industry"'s big lie.

Read up on priming (in that wonderful "Thinking, Fast and Slow" by
Daniel Kahneman) to understand why that works.

They are desperate to generate sufficient cash flow before their
bubble bursts -- the situation is not that different from the 2000s
dotcom crash (just two orders of magnitude bigger). They will kill
for it.

Cheers
-- 
tomás
