Reddit user Bootinbull signed up for their account on October 14, 2015. At 2:45 am they headed to the r/gaming subreddit and posed a question to the community: Was the Xenosaga series of games influenced by Dan Simmons’ Hyperion Cantos?
While the question may have been burned into Bootinbull’s mind, it wasn’t something that bothered others on Reddit. The post received a grand total of zero comments. Still, Bootinbull kept going, posting on r/4chan, r/gaming, and r/cats. Two weeks later, they were submitting a picture of a dog reclining in a deckchair to r/aww, while also taking part in discussions on the site about the challenge Europe faced with an influx of refugees and Turkey’s geopolitical maneuvering.
Bootinbull kept at it, alternating cute pictures of dogs with debates about China and Russia’s future role in the world. But Bootinbull wasn’t real, at least not in our traditional understanding. They were a Russian troll account, likely paid by the state to try to upend conventional online discourse and push the country’s talking points to the masses. The account is one of 1,248 identified by researchers from a consortium of British, American, and European universities as Russian-sponsored trolls operating on the web. The academics flagged the accounts as questionable after tracking the behaviors of 335 users that Reddit itself identified as trolls back in 2017. Reddit continues to track spam bots and trolls, processing 7.9 million reports of “content manipulation” in the second quarter of 2021.
“These are accounts that are controlled by actual people,” says Gianluca Stringhini, assistant professor at Boston University, and one of the researchers who identified the troll accounts using a tool they call TrollMagnifier. The tool is an artificial intelligence model trained on the behavior of known Russian troll accounts, and it purports to be able to identify new, still uncovered troll accounts active on Reddit. (The findings are published in an academic paper that has yet to be released onto the arXiv preprint server. WIRED exclusively obtained an early copy ahead of release.) “They will simulate traditional activity. They will argue with each other. And we’ve learned by analyzing the [questionable] accounts that were released by Twitter and Reddit in the past what they’ll do.”
The playbook for subverting democracy and sowing dissension often starts with social media. It plays out on sites like Reddit through accounts such as the aforementioned Bootinbull and JerryRansom, both identified by Stringhini and his colleagues, which drip-feed a controversial message while using a stream of more regular, mundane posts as cover. Like Bootinbull, JerryRansom used the same cute animal photos and 4chan-baiting memes, then gradually slid into political discourse, with added posts in r/sexygirls. Notably, many of the accounts that Stringhini says behave like those definitively linked to Russia have previously posted on r/aww, a subreddit that encourages users to share photographs likely to prompt an “aww” response, often of cuddly animals.
The troll accounts’ behavior can be discerned through what Stringhini calls “loose coordination patterns.” Less sophisticated bot accounts can be identified through timing and the type of content posted, because they often pump out the same message from a number of different Twitter accounts that have either been specially created for the purpose or co-opted from innocent patsies through cyberattacks that steal their login details. But troll accounts require deeper analysis.
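That blunter bot pattern, the same message pumped out by many accounts at nearly the same time, is simple enough to check for in code. Here is a minimal sketch of such a timing-and-content check, assuming a toy list of (account, text, timestamp) records; the field names, thresholds, and data are illustrative assumptions, not the researchers’ actual pipeline:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, text, timestamp).
# A real pipeline would pull these from a platform API or data dump.
posts = [
    ("acct_a", "Vote no on the referendum!", datetime(2015, 10, 14, 2, 45)),
    ("acct_b", "Vote no on the referendum!", datetime(2015, 10, 14, 2, 46)),
    ("acct_c", "Cute dog in a deckchair", datetime(2015, 10, 14, 9, 0)),
]

def find_copy_paste_bursts(posts, window=timedelta(minutes=10), min_accounts=2):
    """Flag identical messages posted by several accounts within a short window,
    the blunt, well-documented bot pattern described above."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))

    bursts = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        accounts = {account for account, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= min_accounts and span <= window:
            bursts.append((text, sorted(accounts)))
    return bursts

print(find_copy_paste_bursts(posts))
# [('vote no on the referendum!', ['acct_a', 'acct_b'])]
```

Human-run troll accounts defeat exactly this kind of check, which is why, as the researchers note, they require deeper analysis of how accounts interact over time.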
The troll method—which involves real, human beings behind the accounts, rather than preprogrammed bots—has become more popular as the old blunt automated tools lose their power. “There's less obvious use of bot networks now, because I think they have been so documented, and people expose them all the time,” says Eliot Higgins, founder of Bellingcat, which documents and uncovers the use of such campaigns and open-source intelligence, often focused on Russia. “Trolls tend to be more impactful, because then they're taking advantage of the kind of natural development of these communities online, rather than trying to build something from scratch, which is a lot more difficult to do.”
Instead, Russian trolls build up fake personas online, trying to ingratiate themselves into preexisting Reddit communities before steering the conversation toward their true aims. Like Bootinbull and JerryRansom, they start with innocuous posts about dogs and other animals before pivoting to geopolitics. The goal is to make the person behind the account seem more realistic, and more human, which makes it easier to seed the more contentious content. Their focus, according to Stringhini and his colleagues’ research, is on fractious social issues: Troll accounts leveraged the divide over Black Lives Matter and over US presidential elections, arguably helping propel Donald Trump to victory over Hillary Clinton in 2016. Posts from Facebook troll farms were seen by 140 million Americans ahead of the 2020 election, according to an internal document compiled by the social media platform. Yet the accounts also latch on to topics they think will help them blend into the Reddit community. “They are strictly pro-cryptocurrencies, and they advocate for it on social media,” says Stringhini, “while at the same time the same account may push some political discourse as well. They try to blend in.”
It’s all part of the handbook that state-sponsored Russian trolls are given to operate under, and one that is increasingly commonplace among a number of countries. Stringhini points to Russia, China, Venezuela, and Iran as nations trying to shape conversation through organized social media troll campaigns. Yet despite the veneer of normality, the academics have found some tells that can suggest inauthenticity. Troll accounts tend to post less than 10 percent as many comments as a “real” account on Reddit, based on a random sample, suggesting that the pretense is difficult to keep up for long, or that the operators give up when they think their work is done. Conversely, they’re more willing to broadcast than to take part in conversation: They make an average of 42 submissions during their lifetime, compared to a non-troll account’s 32. Most tellingly, and much as deep-cover spies are often found out because they end up meeting known spies at a dead drop, state-sponsored social media trolls are often exposed because they can’t help posting on each other’s threads.
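Those rough behavioral tells, far fewer comments than a typical account, an above-average number of submissions, and replies to other suspected trolls, translate naturally into simple heuristics. Below is a hypothetical scoring sketch: the thresholds loosely mirror the figures above, while the data structures and function names are assumptions rather than anything taken from TrollMagnifier itself.

```python
from dataclasses import dataclass, field

# Hypothetical per-account summary; field names are illustrative.
@dataclass
class AccountStats:
    name: str
    comments: int
    submissions: int
    replied_to: set = field(default_factory=set)  # accounts this one replied to

def troll_signals(acct, baseline_comments=300, known_suspects=frozenset()):
    """Count how many of the article's rough behavioral tells an account matches."""
    signals = 0
    # Tell 1: well under 10 percent of a typical account's comment volume.
    if acct.comments < 0.1 * baseline_comments:
        signals += 1
    # Tell 2: broadcast-heavy, more submissions than the ~32 of a non-troll account.
    if acct.submissions > 32:
        signals += 1
    # Tell 3: replies to other suspected troll accounts, the "dead drop" tell.
    if acct.replied_to & known_suspects:
        signals += 1
    return signals

acct = AccountStats("Bootinbull", comments=12, submissions=42, replied_to={"JerryRansom"})
print(troll_signals(acct, known_suspects=frozenset({"JerryRansom"})))  # 3
```

On its own, a score like this would flag plenty of quiet but legitimate users, which is why the researchers lean on coordination between accounts rather than any single threshold.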
The use of such analysis is welcomed by those monitoring disinformation and tech policy. “New online safety regulators and independent auditors should be looking at deploying tech such as TrollMagnifier, to assess existing safety systems, thereby making social media more accountable for online harms,” says Max Beverton-Palmer, director of the internet policy unit at the Tony Blair Institute for Global Change. A Reddit spokesperson says their policies prohibit content manipulation, which covers coordinated disinformation campaigns as well as any content presented to mislead or falsely attributed to an individual or entity. “We have dedicated teams that detect and prevent this behavior on our platform using both automated tooling and human review,” the spokesperson says. “As a result of our teams’ efforts, we remove 99 percent of policy-breaking content before a user sees it.”
But Higgins and another researcher, Yevgeniy Golovchenko of the University of Copenhagen, who studies disinformation, are circumspect about the replicability of the academics’ troll hunting approach. Some organic behavior can appear troll-like, Higgins says, pointing to errors in earlier, more basic academic research that wasn’t able to as accurately distinguish between inauthentic and authentic behavior. “I would be interested in diving into the data that’s being produced from this to see how much of it is just communities who are interacting with each other versus actual state-sanctioned trolls,” he says. Golovchenko is concerned about the results themselves. “It’s a very interesting topic, and the paper is ambitious, but I’m not entirely sure how to evaluate the accuracy of the tool the authors present,” he says. For one thing, the tool is trained on accounts that have been discovered—so, the worst-designed ones, which perhaps only represent the tip of the iceberg of state-sponsored disinformation capabilities. “These accounts are made to be undetected,” says Golovchenko. “Studies like this will always give us the bare minimum—by design, because we’re talking about state actors that spend resources to stay hidden.”
Others are more welcoming of the paper and its findings. “The proof of any tool is in its application, and, judging by the results here, these researchers have developed a clever way of scaling up the identification of accounts engaged in coordinated troll activity,” says Ciaran O’Connor, an analyst at the Institute for Strategic Dialogue, who monitors disinformation and extremism online. O’Connor does, however, point out that it’s difficult to do such tracking without a seed list of known accounts to see echoes of—something possible on Reddit, which is open about releasing data to help researchers. “Transparency from social media platforms is an ongoing challenge, and we will also argue that more data is always the answer to help us, and subsequently help platforms help themselves, to understand and tackle emerging tactics, tools, and narratives favored by bad actors on social media,” he says.
That transparency has helped researchers spot troll-like behavior, and it’s a favor the researchers hope to repay. “I think this kind of technique is definitely going to help social network companies,” says Stringhini. He points out that while platforms have more indicators that could hint at a troll user’s real background, such as IP addresses and browser fingerprints, examining the pattern of content posting could help them identify inauthentic users more accurately.
Finding those inauthentic users could still prove tricky, though, given the mundanities of Reddit. Bootinbull went silent on the platform on December 3, 2015, 50 days after first posting, the mission to stir hearts and minds seemingly unsuccessful, or simply concluded. The farewell post? A reply to the setup of a lengthy joke in r/jokes that began with a woman asking a man, “Do you drink beer?” Bootinbull blundered in with the answer: “Just beer :)”.