Pro-Trump bots swarm DeSantis, Haley
WASHINGTON (AP) — Over the past 11 months, someone has created thousands of fake, automated Twitter accounts — perhaps hundreds of thousands of them — to offer a stream of praise for Donald Trump.
In addition to posting adoring words about the former president, the fake accounts ridiculed Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and UN ambassador who is challenging her former boss for the Republican presidential nomination in 2024.
When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump but would make a good running mate.
As Republican voters size up their 2024 candidates, the creator of the bot network is looking to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the digital conversation about candidates while exploiting Twitter’s algorithms to maximize their reach.
The sprawling bot network was uncovered by researchers at Cyabra, an Israeli technology company that shared its findings with The Associated Press. While the identity of those behind the network of fake accounts is unknown, Cyabra’s analysts determined that it was likely set up within the United States.
“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,'” said Jules Gross, the Cyabra engineer who first discovered the network. “Those voices are not people. For the sake of democracy, I want people to know this is happening.”
Bots, as they are commonly called, are fake, automated accounts that became notorious after Russia employed them in an attempt to interfere in the 2016 election. While major tech companies have improved their detection of fake accounts, the network identified by Cyabra shows that they remain a potent force in shaping online political discussion.
The new pro-Trump network is actually three different networks of Twitter accounts, all created in large batches in April, October and November 2022. In total, researchers believe hundreds of thousands of accounts may be involved.
The accounts all contain personal photos of the purported account holder as well as a name. Some of the accounts posted their own content, often in response to real users, while others reposted content from real users, helping to further amplify it.
“McConnell… Traitor!” wrote one of the accounts in response to an article in a conservative publication about GOP Senate Leader Mitch McConnell, one of several Republican critics of Trump whom the network targeted.
One way to measure the effect of bots is to measure the percentage of posts on a given topic that are generated by accounts that appear to be fake. The percentage for typical online debates is often in the low single digits. Twitter itself has said that less than 5% of its active daily users are fake or spam accounts.
However, when Cyabra researchers examined negative posts about specific Trump critics, they found far higher levels of inauthenticity. Nearly three-quarters of the negative posts about Haley, for example, were traced back to fake accounts.
The network also helped popularize a call for DeSantis to join Trump as his vice presidential running mate — an outcome that would serve Trump well and allow him to avoid a potentially bitter matchup if DeSantis enters the race.
The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall false image of his support online, researchers found.
“Our understanding of what mainstream Republican sentiment is for 2024 is being manipulated by the proliferation of bots online,” the Cyabra researchers concluded.
The triple network was discovered after Gross analyzed tweets about various national political figures and noticed that many of the accounts posting the content were created on the same day. Most of the accounts remain active, although they have relatively modest numbers of followers.
A message left with a spokesman for Trump’s campaign was not immediately returned.
Most bots are not designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a professor and disinformation researcher at the University of Texas whose latest book focuses on automated propaganda.
When a human user sees a hashtag or piece of content from a bot and reposts it, they do the network’s work for it, and also send a signal to Twitter’s algorithms to further increase the spread of the content.
Bots can also succeed in convincing people that a candidate or idea is more or less popular than it really is, he said. More pro-Trump bots, for example, can lead people to overestimate his overall popularity.
“Bots definitely affect the flow of information,” Woolley said. “They’re built to create the illusion of popularity. Repetition is the core weapon of propaganda, and bots are really good at repeating. They’re really good at getting information in front of people’s eyes.”
Until recently, most bots were easy to identify thanks to their clumsy writing or account names that contained meaningless words or long strings of random numbers. As social media platforms became better at detecting these accounts, the bots became more sophisticated.
So-called cyborg accounts are one example: a bot periodically taken over by a human user who can post original content and respond to users in human-like ways, making them much harder to sniff out.
Bots could soon get a lot sneakier thanks to advances in artificial intelligence. New AI programs can create lifelike profile pictures and posts that sound much more authentic. Bots that sound like a real person and implement deepfake video technology can challenge platforms and users in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and a former Facebook director of public policy.
“The platforms have gotten so much better at fighting bots since 2016,” Harbath said. “But the types that we’re starting to see now, with artificial intelligence, they can create fake people. Fake videos.”
These technological advances likely ensure that bots have a long future in American politics — as digital foot soldiers in online campaigns and as potential problems for both voters and candidates trying to defend themselves against anonymous online attacks.
“There’s never been more noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even unintentionally unfactual? It’s easy to imagine people being able to manipulate it.”