Hi,

Just a few thoughts from me.
It is important to understand how large language models (LLMs) actually work. At a high level, an LLM is a prediction system: it generates the most statistically likely next piece of text based on patterns learned from its training data. While web-search tools can be added to retrieve recent information, the model is still fundamentally summarizing and recombining existing content. Any output must therefore be independently evaluated.

This underlying mechanism also explains why LLMs struggle with complex tasks, particularly in advanced programming and scientific problem-solving. In my own experience, while using one of the latest models for code optimization, the system produced confident but incorrect outputs: in other words, hallucinations. When pushed beyond pattern-matching into true reasoning, its limitations became obvious.

LLMs cannot replace critical thinking. They are not conscious, do not possess understanding, and do not "think" in the human sense. They generate tokens based on probability distributions, not insight or intent. Suggesting otherwise raises an important philosophical question: is human thinking merely probabilistic? That seems unlikely. We still do not fully understand how the human brain works, and assuming we can faithfully replicate it without that understanding is a significant leap.

While LLMs can assist in writing basic applications or accelerating routine tasks, they are not capable of building genuinely complex systems independently. Claims that artificial general intelligence (AGI) is imminent should be treated with skepticism. AGI would imply the ability to reason, understand, and generalize across domains; effectively, to think. Until we understand our own cognition, true simulation of it remains speculative.

There is also a real societal risk: overreliance on AI by those who fail to develop or maintain critical thinking skills. Used uncritically, these tools can weaken reasoning rather than enhance it. AI should therefore be used responsibly, as an aid to human judgment, not a substitute for it.

Finally, it is worth asking who is driving much of the AI hype, and why. Often, it is large technology companies and their executives, whose incentives include boosting valuations and shaping investor expectations. Fear-based narratives, suggesting people or businesses will be "left behind", are powerful psychological tools for creating perceived urgency and value. This is precisely why strong critical thinking skills are essential: to separate realistic capabilities from marketing narratives, and to avoid being passively swept up in technological hype.

Regards,
Dillon

-----Original Message-----
From: R-help <[email protected]> On Behalf Of Gregg Powell via R-help
Sent: Tuesday, 09 December 2025 19:38
To: Robert Knight <[email protected]>
Cc: R help project <[email protected]>; Hans W <[email protected]>
Subject: Re: [R] Chatbot-generated R Code

I did not say blindly trust LLMs, nor did I recommend their use. That is up to each individual. Those who choose not to use LLMs will not be competitive against their peers who do - that is my claim.

As for me, I use LLMs. I have no axe to grind against using LLMs or those who use them. Honestly, at 58, I did not think I'd see AI in my lifetime. I see LLMs as a tool. A very useful tool.

I would not want to be a younger person having to compete against AI. I am glad to be in a position where AI and its impact on society will have little or no financial impact on me personally.
I commiserate with those not in a similar circumstance. I see many taking a supercilious attitude toward those who use AI (as demonstrated in your emails, for instance) - particularly among coders. Ironically, coders are among the first and hardest hit by AI, along with graphic designers, writers, researchers, data scientists... there is a long and growing list.

The genie is out of the bottle. Governments are run by people either too greedy or power-hungry to curtail the technology. It is the start of a new arms race. Some claim it will help society, others claim it will destroy it. As most things usually go, the truth most probably lies somewhere in the middle. Only time will tell.

All the best!
Gregg

On Tuesday, December 9th, 2025 at 10:06 AM, Robert Knight <[email protected]> wrote:

> Responding with LLM output to a question about risk and the legality of
> something is not comforting. Naked Capitalism reported that hallucinations
> are increasing, not decreasing, in language models.
> I shall trust my own brain over an LLM output. Are you really suggesting
> that people trust an LLM counterview of the meaning of contracts they sign?
>
> This kind of thinking, and that guy who did not understand the law of
> large numbers (both experts in the field), is why people like me have to
> work in other occupations and argue in the public sphere until someone
> like Kennedy can get into place.
>
> An LLM tells me not to believe my lying eyes and my cognitive understanding
> of the contract I am about to sign.... Trust the LLM, you say.
>
> My word.
>
> On Tuesday, December 9, 2025, Gregg Powell <[email protected]> wrote:
>
> > Let's let Claude respond for itself:
> >
> > r/
> > Gregg
> >
> > On Tuesday, December 9th, 2025 at 8:39 AM, Robert Knight
> > <[email protected]> wrote:
> >
> > > It seems like malpractice to recommend Claude to someone using R or big
> > > data, since what they would use it for is *explicitly* against the terms
> > > of service. Machine learning predates the microchip.
> > > See below.
> > >
> > > Also, quality control will make a comeback. Expert systems cannot be
> > > replaced with something akin to Bayes probability charts indefinitely.
> > >
> > > > you may not use the service to “develop any products or services that
> > > > compete with our Services, including to develop or train any artificial
> > > > intelligence or machine learning algorithms or models.”
> > >
> > > Claude’s terms further state:
> > >
> > > > “Equitable relief. You agree that (a) no adequate remedy exists at law
> > > > if you breach Section 3 (Use of Our Services); (b) it would be
> > > > difficult to determine the damages resulting from such breach, and any
> > > > such breach would cause irreparable harm; and (c) a grant of injunctive
> > > > relief provides the best remedy for any such breach. You waive any
> > > > opposition to such injunctive relief, as well as any demand that we
> > > > prove actual damage or post a bond or other security in connection with
> > > > such injunctive relief.”
> > >
> > > Machine learning includes linear regression. Other machine learning
> > > algorithms include logistic regression, decision trees, random forests,
> > > support vector machines, k-nearest neighbors, and Bayes algorithms.
> > > It seems to me that, as of 14 October 2024, no one seeking to handle
> > > any data science can legitimately use Claude.
> > >
> > > On Tuesday, December 9, 2025, Gregg Powell via R-help
> > > <[email protected]> wrote:
> > >
> > > > Humans who don't adapt to LLMs, or whatever form AI takes as it
> > > > evolves, will be left in the dust.
> > > >
> > > > People may just now be waking up to the fact that we're three years
> > > > into a tremendous revolution, one of the greatest in human history. It
> > > > follows the Bronze Age, the Iron Age, the Industrial Revolution, the
> > > > computer revolution, the Information Age, and now... AI.
> > > >
> > > > AGI is approaching. How quickly? Who can say. Whether AI can ever be
> > > > truly sentient remains a mystery. But once it can adequately replicate
> > > > sentience, some will ask: what's the difference?
> > > >
> > > > As to the question of who judges what's acceptable from a coding
> > > > standpoint: capitalism will. Corporations will. And the question of
> > > > whether this is the future of coding is already behind us. It is coding
> > > > now, and it will only continue to improve in capability.
> > > >
> > > > Try Replit, Cursor, Claude Code. Humans are incapable of keeping up. AI
> > > > still struggles with some of the most complex tasks, and it does poorly
> > > > at orchestrating across large repositories, but it's improving rapidly.
> > > > Just my observations.
> > > >
> > > > Those who look down their noses at all this will be left behind.
> > > >
> > > > All the best!
> > > > Gregg
> > > >
> > > > On Tuesday, December 9th, 2025 at 6:32 AM, Hans W
> > > > <[email protected]> wrote:
> > > >
> > > > > SORRY if I missed such a discussion somewhere on R-HELP.
> > > > >
> > > > > For many years I have wanted to write an R function that finds the
> > > > > closest pair of points among a possibly huge set of points in the
> > > > > two-dimensional plane. I never did, perhaps because of the
> > > > > complexity of the task.
> > > > >
> > > > > Now I have found a book, among others, describing the "sweeping
> > > > > algorithm", which is perfectly suited to the problem. As a test, I
> > > > > asked chatbots like DeepSeek and ChatGPT for such a function, and
> > > > > mentioned the sweeping algorithm.
> > > > >
> > > > > DeepSeek, for instance, immediately came up with a complete,
> > > > > efficient solution and test cases that I checked against brute
> > > > > force. I can see that it utilized the sweeping algorithm,
> > > > > documented the code, and set up a help file. I made some changes
> > > > > and improved the code a bit, but whatever I do, it is still code
> > > > > generated by a clever chatbot.
> > > > >
> > > > > Now I ask myself: Is this a correct and lawful way to write code in
> > > > > the future? I am not even sure that DeepSeek did not use an
> > > > > implementation of the sweeping algorithm that is under an ACM
> > > > > license and would not be allowed on CRAN.
> > > > >
> > > > > I wonder how one handles this matter. Will this be the future of
> > > > > code writing (for R and other languages)? I would really appreciate
> > > > > hearing your opinion, or a hint to a discussion about it.
> > > > >
> > > > > Hans Werner

______________________________________________
[email protected] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
https://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
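P.S. For anyone curious about the "sweeping algorithm" Hans mentions above: here is a minimal, illustrative R sketch of the sweep-line idea, plus a brute-force cross-check of the kind he describes. The function names (closest_pair_sweep, closest_pair_brute) are invented for this example, and the sweep version scans a naive x-window instead of maintaining a y-ordered active set, so it is not the full O(n log n) textbook algorithm - just a sketch of the principle.

## Sketch: closest pair of points via a left-to-right sweep.
## Assumes x, y are numeric vectors of equal length >= 2.
closest_pair_sweep <- function(x, y) {
  stopifnot(length(x) == length(y), length(x) >= 2)
  ord <- order(x)                  # sweep the plane left to right
  xs <- x[ord]; ys <- y[ord]
  n <- length(xs)
  best <- Inf; pair <- c(NA, NA)
  left <- 1                        # left edge of the active window
  for (i in 2:n) {
    # discard points farther than 'best' behind the sweep line
    while (left < i && xs[i] - xs[left] > best) left <- left + 1
    for (j in left:(i - 1)) {
      if (abs(ys[i] - ys[j]) < best) {   # only near-in-y points can win
        d <- sqrt((xs[i] - xs[j])^2 + (ys[i] - ys[j])^2)
        if (d < best) { best <- d; pair <- c(ord[j], ord[i]) }
      }
    }
  }
  list(distance = best, indices = pair)
}

## Brute-force reference, O(n^2), for checking the sweep version
closest_pair_brute <- function(x, y) {
  dm <- as.matrix(dist(cbind(x, y)))   # all pairwise distances
  diag(dm) <- Inf                      # ignore self-distances
  idx <- which(dm == min(dm), arr.ind = TRUE)[1, ]
  list(distance = min(dm), indices = unname(idx))
}

## Quick check on random points
set.seed(1)
x <- runif(1000); y <- runif(1000)
stopifnot(isTRUE(all.equal(closest_pair_sweep(x, y)$distance,
                           closest_pair_brute(x, y)$distance)))

The textbook sweep-line algorithm keeps the active set ordered by y (usually in a balanced tree) so each point is compared against only a handful of neighbors; the window scan above degrades to O(n^2) in the worst case but is fast on typical random data.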

