This is the script of my national radio report yesterday regarding the
involvement of AI chatbots in a recent murder-suicide and a teen
suicide. As always, there may have been minor wording variations from
this script as I presented the report live on air.
- - -
Well, we know that when it comes to concerns about large language
model generative AI -- the kind that powers AI search overviews,
chatbots, and more -- the boosters, the Big Tech AI firms and their
leaders, have suggested that the dangers of this kind of AI are
overblown, sometimes dismissing worst-case scenarios of people being
hurt as mere sci-fi fantasies. Those of us concerned about how this
misinformation-laden, not really intelligent at all tech is being
pushed on the public by the Big Tech billionaires have of course
remained concerned, and now some of our worst fears about this
technology are horrifically coming true.
There have been a wide variety of concerns about AI chatbots.
Researchers have shown that they not only spew misinformation, but are
also highly addictive to many people, and are designed in ways that
cause them to tend to agree with users and reinforce their existing
beliefs and fears.
And now this kind of AI chatbot behavior has moved beyond the realm of
hazardous misinformation into the realm of actual human deaths, making
this the single most disturbing technology story I've ever reported
here. Both the recent suicide of a teen, and an earlier murder-suicide
by an adult male who killed his mother and then himself, have been
directly linked to AI chatbots. The AI horror has arrived.
In both of these cases, investigations have revealed some really quite
terrifying interactions. In the case of the murder-suicide, the
chatbot dialogues frequently confirmed and exacerbated the user's
existing fears that his mother was targeting him, which was not the
reality. In the case of the teen suicide, the chatbot reportedly began
providing the youth with tips for his plan to hang himself -- which he
ultimately did. There are many more sordid details about these cases
on the public record, and I'm not going to discuss them further here.
But the point is that the supposed safeguards that the Big Tech AI
firms tout as being built into the AI chatbots OBVIOUSLY did NOT
perform well in these cases, and in many other situations that we know
of. The firm whose chatbot was involved in the teen suicide reportedly
admitted in internal discussions that its safeguards were lacking.
One of the foundational problems with these chatbots seems to be that
the longer you talk to them -- and remember, most of them can now be
configured to remember where you left off in a discussion and continue
another time -- the more likely they are to start agreeing with what
you're saying rather than arguing against your stated point of view.
The risks in this kind of situation are obvious. It can quickly become
a self-reinforcing cycle that rapidly grows more and more dangerous.
The view of Big Tech still seems to be that they should not be held
responsible for situations like this. When it comes to AI, they often
point to what's commonly called "Section 230," the legal framework
that doesn't hold service providers responsible for third-party
content. But many observers feel that generative AI, including these
chatbots, should not have any Section 230 protections at all. Instead,
the firms (and the leadership of these firms) should be held
responsible for tragedies like these just as a human would be -- as in
the case of a woman who was sentenced to prison for urging her
boyfriend to kill himself.
All other AI issues aside, it is utterly unacceptable for these
systems to be actively involved in suicides and murders. And
irrespective of the technical challenges involved, these firms and
their leadership should be held responsible for these tragedies, both
financially and criminally as appropriate. Because it's not actually
the AI doing this harm alone; it's the firms' leaders who are pulling
the AI strings, and it's past time that they start paying personally
for the damage that their creations are doing to their users and to
the rest of society.
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues