
This is from a few days ago -- all of them are doing it.  I'm assuming this 
story wasn't written by some AI software.  This email wasn't.  Which is exactly 
what it'd say if it was...


By Shane Goldmacher

The Democratic Party has begun testing the use of artificial intelligence to 
write first drafts of some fund-raising messages, appeals that often perform 
better than those written entirely by human beings.

Fake A.I. images of Donald J. Trump getting arrested in New York spread faster 
than they could be fact-checked last week.

And voice-cloning tools are producing vividly lifelike audio of President Biden 
— and many others — saying things they did not actually say.

Artificial intelligence isn’t just coming soon to the 2024 campaign trail. It’s 
already here.

The swift advance of A.I. promises to be as disruptive to the political sphere 
as to broader society. Now any amateur with a laptop can manufacture the kinds 
of convincing sounds and images that were once the domain of the most 
sophisticated digital players. This democratization of disinformation is 
blurring the boundaries between fact and fake at a moment when the acceptance 
of universal truths — that Mr. Biden beat Mr. Trump in 2020, for example — is 
already being strained.

And as synthetic media gets more believable, the question becomes: What happens 
when people can no longer trust their own eyes and ears?

Inside campaigns, artificial intelligence is expected to soon help perform 
mundane tasks that previously required fleets of interns. Republican and 
Democratic engineers alike are racing to develop tools to harness A.I. to make 
advertising more efficient, to engage in predictive analysis of public 
behavior, to write more and more personalized copy and to discover new patterns 
in mountains of voter data. The technology is evolving so fast that most 
predict a profound impact, even if specific ways in which it will upend the 
political system are more speculation than science.

“It’s an iPhone moment — that’s the only corollary that everybody will 
appreciate,” said Dan Woods, the chief technology officer on Mr. Biden’s 2020 
campaign. “It’s going to take pressure testing to figure out whether it’s good 
or bad — and it’s probably both.”

OpenAI, whose ChatGPT chatbot ushered in the generative-text gold rush, has 
already released a more advanced model. Google has announced plans to expand 
A.I. offerings inside popular apps like Google Docs and Gmail, and is rolling 
out its own chatbot. Microsoft has raced a version to market, too. A smaller 
firm, ElevenLabs, has developed a text-to-audio tool that can mimic anyone’s 
voice in minutes. Midjourney, a popular A.I. art generator, can conjure 
hyper-realistic images with a few lines of text that are compelling enough to 
win art contests.

“A.I. is about to make a significant change in the 2024 election because of 
machine learning’s predictive ability,” said Brad Parscale, Mr. Trump’s first 
2020 campaign manager, who has since founded a digital firm that advertises 
some A.I. capabilities.

Disinformation and “deepfakes” are the dominant fear. While forgeries are 
nothing new to politics — a photoshopped image of John Kerry and Jane Fonda was 
widely shared in 2004 — the ability to produce and share them has accelerated, 
with viral A.I. images of Mr. Trump being restrained by the police only the 
latest example. A fake image of Pope Francis in a white puffy coat went viral 
in recent days, as well.

Many are particularly worried about local races, which receive far less 
scrutiny. Ahead of the recent primary in the Chicago mayoral race, a fake video 
briefly sprang up on a Twitter account called “Chicago Lakefront News” that
impersonated one candidate, Paul Vallas.

“Unfortunately, I think people are going to figure out how to use this for evil 
faster than for improving civic life,” said Joe Rospars, who was chief 
strategist on Senator Elizabeth Warren’s 2020 campaign and is now the chief 
executive of a digital consultancy.

Those who work at the intersection of politics and technology return repeatedly 
to the same historical hypothetical: If the infamous “Access Hollywood” tape 
broke today — the one in which Mr. Trump is heard bragging about assaulting 
women and getting away with it — would Mr. Trump acknowledge it was him, as he 
did in 2016?

The nearly universal answer was no.

“I think about that example all the time,” said Matt Hodges, who was the 
engineering director on Mr. Biden’s 2020 campaign and is now executive director 
of Zinc Labs, which invests in Democratic technology. Republicans, he said, 
“may not use ‘fake news’ anymore. It may be ‘Woke A.I.’”

For now, the frontline function of A.I. on campaigns is expected to be writing 
first drafts of the unending email and text cash solicitations.

“Given the amount of rote, asinine verbiage that gets produced in politics, 
people will put it to work,” said Luke Thompson, a Republican political 
strategist.

As an experiment, The New York Times asked ChatGPT to produce a fund-raising 
email for Mr. Trump. The app initially said, “I cannot take political sides or 
promote any political agenda.” But then it immediately provided a template of a 
potential Trump-like email.

The chatbot denied a request to make the message “angrier” but complied when 
asked to “give it more edge,” to better reflect the often apocalyptic tone of 
Mr. Trump’s pleas. “We need your help to send a message to the radical left 
that we will not back down,” the revised A.I. message said. “Donate now and 
help us make America great again.”

Among the prominent groups that have experimented with this tool is the 
Democratic National Committee, according to three people briefed on the 
efforts. In tests, the A.I.-generated content the D.N.C. has used has, as often as not, performed as well as or better than copy drafted entirely by humans at generating engagement and donations.

Party officials still make edits to the A.I. drafts, the people familiar with 
the efforts said, and no A.I. messages have yet been written under the name of 
Mr. Biden or any other person, two people said. The D.N.C. declined to comment.

Higher Ground Labs, a small venture capital firm that invests in political 
technology for progressives, is currently working on a project, called Quiller, 
to more systematically use A.I. to write, send and test the effectiveness of 
fund-raising emails — all at once.
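As background on what “testing the effectiveness” of email variants typically means: each version is sent to a randomized slice of the list and the resulting donation rates are compared. The sketch below is a generic two-proportion z-test in Python, not Quiller’s actual method; every number in it is invented for illustration.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 10,000 recipients per variant.
# Variant A (A.I. draft): 290 donations; variant B (human-written): 230.
z = two_proportion_z(290, 10_000, 230, 10_000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

With these made-up numbers the statistic comes out around 2.67, so the gap would clear the conventional significance bar; in practice campaigns run many such comparisons continuously.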

“A.I. has mostly been marketing gobbledygook for the last three cycles,” said 
Betsy Hoover, a founding partner at Higher Ground Labs who was the director of 
digital organizing for President Barack Obama’s 2012 campaign. “We are at a 
moment now where there are things people can do that are actually helpful.”

Political operatives, several of whom were granted anonymity to discuss 
potentially unsavory uses of artificial intelligence they are concerned about 
or planning to deploy, raised a raft of possibilities.

Some feared bad actors could leverage A.I. chatbots to distract or waste a 
campaign’s precious staff time by pretending to be potential voters. Others 
floated producing deepfakes of their own candidate to generate personalized 
videos — thanking supporters for their donations, for example. In India, a candidate in 2020 used deepfake technology to disseminate videos of himself speaking in different languages; the technology is far superior now.

Mr. Trump himself shared an A.I. image in recent days that appeared to show him 
kneeling in prayer. He posted it on Truth Social, his social media site, with 
no explanation.

One strategist predicted that the next generation of dirty tricks could be 
direct-to-voter misinformation that skips social media sites entirely. What if, 
this strategist said, an A.I. audio recording of a candidate was sent straight 
to the voice mail of voters on the eve of an election?

Synthetic audio and video are already swirling online, much of it as parody.

On TikTok, there is an entire genre of videos featuring Mr. Biden, Mr. Obama 
and Mr. Trump profanely bantering, with the A.I.-generated audio overlaid as 
commentary during imaginary online video gaming sessions.

On “The Late Show,” Stephen Colbert recently used A.I. audio to have the Fox 
News host Tucker Carlson “read” aloud his text messages slamming Mr. Trump. Mr. 
Colbert labeled the audio as A.I. and the image on-screen showed a blend of Mr. 
Carlson’s face and a Terminator cyborg for emphasis.

The right-wing provocateur Jack Posobiec pushed out a “deepfake” video last 
month of Mr. Biden announcing a national draft because of the conflict in 
Ukraine. It was quickly seen by millions.

“The videos we’ve seen in the last few weeks are really the canary in the coal 
mine,” said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics. “We measure
advances now not in years but in months, and there are many months before the 
election.”

Some A.I. tools were deployed in 2020. The Biden campaign created a program, 
code-named Couch Potato, that linked facial recognition, voice-to-text and 
other tools to automate the transcription of live events, including debates. It 
replaced the work of a host of interns and aides, and was immediately 
searchable through an internal portal.
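For a sense of what “immediately searchable” transcription involves, here is a toy inverted index over timestamped transcript segments. This is a generic Python illustration, not the Couch Potato system itself; every name, timestamp and line of dialogue below is invented.

```python
from collections import defaultdict

class TranscriptIndex:
    """Toy searchable store of transcribed speech segments."""

    def __init__(self):
        self.index = defaultdict(set)   # word -> set of segment ids
        self.segments = {}              # segment id -> (timestamp, text)

    def add_segment(self, seg_id, timestamp, text):
        """Store one transcribed segment and index its words."""
        self.segments[seg_id] = (timestamp, text)
        for word in text.lower().split():
            self.index[word.strip(".,!?")].add(seg_id)

    def search(self, query):
        """Return (timestamp, text) pairs containing every query word."""
        words = [w.lower() for w in query.split()]
        if not words:
            return []
        hits = set.intersection(*(self.index.get(w, set()) for w in words))
        return sorted(self.segments[s] for s in hits)

idx = TranscriptIndex()
idx.add_segment(1, "00:01:10", "We will rebuild the economy from the bottom up.")
idx.add_segment(2, "00:02:45", "The economy is the central question of this debate.")
idx.add_segment(3, "00:04:02", "Health care costs keep rising.")

print(idx.search("economy"))         # segments 1 and 2
print(idx.search("economy debate"))  # only segment 2
```

In a real pipeline, speech-to-text output would feed `add_segment` as an event streams in, which is what lets staff query a debate while it is still under way.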

The technology has improved so quickly, Mr. Woods said, that off-the-shelf 
tools are “1,000 times better” than what had to be built from scratch four 
years ago.

One looming question is what campaigns can and cannot do with OpenAI’s powerful 
tools. One list of prohibited uses last fall lumped together “political 
campaigns, adult content, spam, hateful content.”

Kim Malfacini, who helped create OpenAI’s rules and is on the company’s
trust and safety team, said in an interview that “political campaigns can use 
our tools for campaigning purposes. But it’s the scaled use that we are trying 
to disallow here.” OpenAI revised its usage rules after being contacted by The 
Times, specifying now that “generating high volumes of campaign materials” is 
prohibited.

Tommy Vietor, a former spokesman for Mr. Obama, dabbled with the A.I. tool from 
ElevenLabs to create a faux recording of Mr. Biden calling into the popular 
“Pod Save America” podcast that Mr. Vietor co-hosts. He paid a few dollars and 
uploaded real audio of Mr. Biden, and out came an audio likeness.

“The accuracy was just uncanny,” Mr. Vietor said in an interview.

The show labeled it clearly as A.I. But Mr. Vietor could not help noticing that 
some online commenters nonetheless seemed confused. “I started playing with the 
software thinking this is so much fun, this will be a great vehicle for jokes,” 
he said, “and finished thinking, ‘Oh God, this is going to be a big problem.’”



--
Glenn English
