This week, a real video of CNN correspondent Clarissa Ward ducking for cover from rockets near the Israel-Gaza border went viral with fake audio undermining her reporting. Earlier this month, an AI-generated clip of CBS Mornings co-host Gayle King promoting a product she’d never used, or even heard of, circulated online. A deepfake of CNN anchor Anderson Cooper claiming that former president Donald Trump was “ripping us a new asshole” also recently went viral after Trump and his son shared it on Truth Social and Twitter. And Sahay’s purported news broadcasts, which helped him amass an audience of millions, have featured more than a dozen prominent anchors from a range of TV networks delivering problematic commentary and interviewing the creator as he mocks school shootings, the September 11 terrorist attacks, rape victims and others harmed by criminals.
In many cases, these made-up segments featuring real-life broadcasters are drawing more eyeballs than legitimate clips posted on news organizations’ blue-check social media accounts. One of Sahay’s recent deepfakes of Face the Nation’s Margaret Brennan, for example, was liked more than 300,000 times on TikTok, while the most popular video posted the same day on Face the Nation’s official TikTok account drew just 7,000 likes. Sahay was banned from TikTok as Forbes was reporting this story; he did not respond to multiple requests for comment or to a detailed list of questions. Though his TikTok account is now dormant, Sahay’s far-reaching videos are still easy to find on the platform, reposted by other users, and a number of them remain live on YouTube.
Most large, mainstream social media platforms have policies against deepfakes. TikTok spokesperson Ariane Selliers said, “TikTok requires creators to label realistic AI-generated content,” either with a sticker or in a caption, and that TikTok “continues to remove content that can harmfully mislead or impersonate people.” (Most of Sahay’s videos did not include any such disclaimer.) TikTok prohibits deepfakes containing the likeness of private individuals, and “while we provide more latitude for public figures, we do not want them to be the subject of abuse, or for people to be misled about political or financial issues,” the company states in its rulebook. YouTube spokesperson Elena Hernandez said, “We’ve long had misinformation policies to prohibit content that has been technically manipulated or doctored in a way that poses a serious risk of egregious harm.”
CNN spokesperson Emily Kuhn declined to comment on the deepfake news segments, but she said the manipulated audio in the clip of CNN’s correspondent near the Israel-Gaza border “is fabricated, inaccurate and irresponsibly distorts the reality of the moment that was covered live on CNN, which people should watch in full for themselves on a trusted platform.” Norah O’Donnell, the CBS anchor, did not immediately respond to a request for comment, but CBS News spokesperson April Andrews said, “CBS takes its intellectual property rights very seriously.” BBC spokesperson Robin Miller said that “whenever we become aware of a deepfake video, our lawyers take action” and that “in a world of increasing disinformation, we urge everyone to check links and URLs to ensure they are getting news from a trusted source.”
A new era for deepfakes
Deepfakes, and lower-budget “cheapfakes,” are not new. This month alone, they’ve been used to target everyone from Tom Hanks (whose AI doppelganger was promoting a dental plan) to YouTube’s top creator MrBeast (who appeared to be hosting “the world’s largest iPhone 15 giveaway”). “Are social media platforms ready to handle the rise of AI deepfakes?” MrBeast, whose real name is Jimmy Donaldson, warned last week on Twitter. “This is a serious problem.” (Disclosure: Donaldson is slated to become a member of Forbes’ board when a planned sale of the company is completed.) Meta’s Oversight Board also announced a case this week concerning a manipulated video of President Joe Biden.
But doctored news broadcasts relying on the voices and faces of high-profile journalists appear to be a newer, and potentially more dangerous, tack. Earlier this year, the social media analytics firm Graphika published research on “a new and distinctive form of video content” it had found on Facebook, Twitter and YouTube: AI-generated news anchors that were part of “a pro-Chinese political spam operation.” (Chinese state media have also experimented with AI news anchors.) And longer, higher-production deepfake videos, like some of the news segments from Sahay, continue to gain traction across platforms.
“Videos of news anchors [are] a compelling vessel for delivering disinformation,” said Hany Farid, a professor at UC Berkeley’s School of Information who has testified before Congress on deepfakes. “In many cases, the anchors are known to viewers and trusted, and even if not, the general news format is familiar and therefore more trusted.”
“The results, while still not perfect, are very good, particularly when viewed on a mobile device and when viewers are moving fast through social media,” added Farid, whose research has focused on human perception and image analysis. “We are going to have to get more serious about protecting the rights of the people whose likeness and voice are being co-opted.”
Deepfakes are often deployed in memes to make people laugh, and the more extreme scenario, in which manipulated content spreads mis- or disinformation aimed at changing the course of elections, company IPOs and other high-stakes events, has not come to pass. But the 2020 election notably predated the AI explosion ushered in by ChatGPT, which earlier this year became one of the fastest-growing consumer apps in history and has contributed to widespread adoption of the technology and billions of dollars in funding for AI startups.
Now, nearly anyone can employ easy-to-use, readily available AI software to make videos or audio of public figures (or average people) doing and saying things they never actually did. That has heightened fears that AI will transform the looming 2024 elections into a chaotic Wild West—a concern only intensified by the rise in deepfakes being weaponized as seemingly genuine news reports.
The biggest worry, according to Kevin Goldberg, a First Amendment specialist at the DC-based nonprofit Freedom Forum, is that videos featuring real news anchors in realistic-looking situations could serve as an “aggressive form of misinformation… a real concern we’re going to have to deal with going in toward the 2024 elections.”
But he cautioned that “we also don't want to overreact.”
“Yes, there's potential for mischief here,” he told Forbes. “But our legal system, and our society, are more than well-equipped to handle this new medium.”