In April 2022, it was impossible to escape Depp v Heard. Even if you weren’t streaming every second of the court proceedings between Johnny and Amber, your feeds were undoubtedly clogged with related content. And by “related content” I mean content that primarily trashed Amber Heard. Perhaps you saw tweets declaring #JusticeForJohnny and #AmberIsAnAbuser, or watched people hammily acting out scenes from their marriage on TikTok, layered over with audio of Heard alleging domestic and sexual abuse by the actor, like a dystopian pantomime. Or perhaps you found the trial slipping into your offline, “real” life when getting coffee – the tip jars labelled “Amber” and “Johnny”, with wads of cash stuffed into Depp’s.
At the time, a few journalists and legal experts covering the trial, such as Kat Tenbarge and Lucia Osborne-Crowley, smelt a rat and suggested that at least some of the Amber Heard hate campaign must be inauthentic. After being inundated with suggested YouTube videos of Heard being “EXPOSED” on the stand, alongside ones of Depp visiting hospital patients dressed as Captain Jack Sparrow – videos I couldn’t seem to shake from my algorithm – I wrote a piece for Dazed in which I accused die-hard Depp fans of having been “swept up in a highly-orchestrated, seemingly money-no-object PR operation”. But what if it was something messier, stranger and more troubling, instigated by bad-faith actors unrelated to the case?
Enter Who Trolled Amber? – a podcast investigation that is both overdue and revelatory. “This story’s horizons are broadening,” reporter and host Alexi Mostrous says in the third episode. Mostrous and his team began by digging into a vast dataset of tweets about Depp and Heard. Alarm bells started ringing almost immediately. One account had tweeted more than 370,000 times since 2021, which, Mostrous calculated, was a post every two minutes, 24 hours a day, for three years. They also found an ostensibly Chilean far-right “political troll who suddenly switches allegiances to attack Amber Heard; Spanish-speaking bot networks posting hundreds of pro-Depp tweets; Thai accounts that tweet once and go viral, and tens of thousands of identical messages left under Amber Heard videos on YouTube”.
The truly shocking revelation at the heart of the series is just how vast and complex the disinformation movement against Heard was. This was not one single campaign but multiple, hybrid attacks – bot armies and real people working in tandem. The Depp/Heard saga was never just a story about the very public breakdown of a celebrity marriage. Yet this may well be why the disinformation campaign went under the radar: celebrity culture functioned as a smokescreen.
“There was obviously a huge amount of publicity about the case,” Mostrous tells me. “There were always rumours that there were bots and manipulation used. But I was surprised that, although this case involved so much media time and so much money, no one really picked up the bot issue and ran with it.” He suggests this might partly be because, at the time of the US trial, it wasn’t a pressing issue for the legal teams. “They were more focused on going through the evidence and establishing what had happened. And sometimes it takes a while for these things to settle. It’s not the sort of thing that you can easily analyse in the moment.”
Part of the issue is that “looking into this stuff is really, really difficult,” Mostrous notes. For starters, “there’s an accountability and a transparency problem”. In the world of manipulation, and hacking for hire, “there’s five to 10 to 20 steps between the client, and then his law firm, and then an investigations company based in London that they commission. And then the London-based investigations company commissions an independent but London-based security professional, who knows someone in Israel, who then subcontracts it out to someone in India, who does the hack or the manipulation, feeds the data back up the chain, and then, by the time it gets back to the law firm, there are no fingerprints.” Essentially, the industry is effective precisely because it’s so convoluted.
Yet, in another sense, mass online manipulation has never been simpler. “It’s easy these days to design pieces of software that can create and run multiple social media accounts that look quite genuine,” Mostrous tells me. “That’s not a particularly onerous process anymore, in a way it might have been five or 10 years ago.”
This leads to a conundrum for investigators. “There’s an imbalance between, on the one hand, manipulation campaigns being really easy to create, cheap to put in place, and potentially able to drive conversations,” Mostrous says. “On the other hand, they’re very difficult to detect. They’re super difficult for journalists and researchers, but they’re also not easy even for the platforms to detect, particularly in situations where they’ve cut back on safety teams and on their own resources.” Mostrous sums the situation up plainly: “There’s this imbalance between how easy it is to perpetrate, and how difficult it is to catch. That’s quite a worrying gap.”
While making Who Trolled Amber?, Mostrous knew he and his research team would have to be rigorous. “What we didn’t want to do was to find some bots and then just say, okay, there are some bots, that means something dodgy happened,” he explains. “Because if you take basically any major public discussion on social media, there will be a small proportion of that conversation that is driven by bots. That doesn’t mean that there’s some nefarious bad guy masterminding it.”
Yet, in this case, it wasn’t a “small proportion”. “What was surprising, at least according to one of the researchers,” Mostrous says, “was that 50 per cent of the conversation around Amber had been inauthentically generated.”
Obviously, this doesn’t mean there weren’t huge numbers of real people who were interested in the case. They made up the majority of accounts tweeting about Depp. However, Mostrous found that they were only posting about the trial a handful of times each. Bot accounts, by contrast, were tweeting up to 1,000 times a day, meaning “the majority of tweets that were posted, were inauthentic”.
In the podcast, Mostrous compares the role of bots in the Depp/Heard story to that of the agent provocateur, “encouraging and inciting ugly elements that were already present”. Daniel Maki – a former spy who put Mostrous onto the case in the first place – puts it slightly differently. “We’re looking at something here that feels beyond just the general din of the crowded bar,” he says. “This is somebody getting up on stage, ripping off their pants and throwing eggs at people in the audience”. You couldn’t ignore it, even if you tried. But was this amplification or instigation? In other words, who started it?
To begin to answer this, it helps to look at the timeline. The database of tweets the podcast team first dug into covered April 2020 to January 2021 – over a year before the US trial began. One 48-hour window proved to be crucial. On 6 November 2020, Depp announced on Instagram that he’d been fired from the third Fantastic Beasts movie. (A week earlier, a UK trial had ruled against Depp, allowing The Sun newspaper to label him a “wifebeater”.) In the two days that followed Depp’s announcement, a rash of suspicious bot activity flooded the internet. What’s significant about this finding is that it took place some 17 months before the US trial. Essentially, this suggests that by the time most people were engaging with the story, it was already too late. While it remains unclear and unverifiable who or what initiated the bot activity, the groundwork had already been laid for Heard to be damned in the court of public opinion.
“There was potentially a lot of manipulation, a lot of inauthenticity before the trial,” Mostrous agrees. “During the trial, there were lots of people who were like, ‘okay, we can make money out of this case’, because they had a constant supply of video images that they could cut and splice, and they could make Amber look bad and they could get clicks. But in a way, that was the more predictable end of things,” he says. “By that time, the internet’s opinion on Amber had already been formed.”
Yet, at the same time, this strategy wasn’t plucked out of thin air. After all, the #AmberIsAnAbuser content played into a narrative as old as time itself: “man suffering at the hands of a manipulative, deceitful, evil woman”.
“When you take a step back from this, actually the most interesting thing is the online misogyny,” Mostrous suggests. “There’s so much of it. It makes you quite depressed, because there was a groundswell of hate that was there, just waiting for a case to come up”. As Who Trolled Amber? continues, Mostrous and his team explore possible links between the trolling of Heard and Saudi Arabia. But perhaps there doesn’t need to be a single purpose behind the campaign. “This is a propaganda war,” cyber security expert EJ Hilbert says in the series. The goal is division; destabilisation.
“A lot of disinformation campaigns, especially political ones, have as their ‘objective’ just a sense of instability,” Mostrous agrees. “We saw that in the Russian bot campaign before the US election. I think the more we understand about misinformation campaigns, the more we’re seeing that actually, because they’re so easy to set up, they’re not just limited to political issues anymore.” Indeed, misinformation may be more effectively deployed on issues that approach politics side-on, provoking a culture war.
“There’s no reason why the Depp-Heard case wouldn’t have fallen into that category,” Mostrous tells me. “Because it kind of brings up so many culture war issues about ‘Should you believe all women?’ and ‘Hasn’t the MeToo movement gone too far?’, and all of that stuff that people, on both sides, feel really, really strongly about.”
Towards the end of Who Trolled Amber?, Mostrous describes the investigation as “a warning”. Recently, I’ve encountered similar warnings in Naomi Klein’s Doppelganger and Sian Norris’s Bodies Under Siege, books that examine coordinated far-right attacks on reproductive rights across the globe. Is this a cultural tipping point, then, where we can start to get to grips with disinformation campaigns? Or has the horse already bolted?
“I think we are slowly coming to terms with the fact this is a big problem,” Mostrous muses. “But at the same time,” he adds, “the technology isn’t standing still either.” As with most tech issues, as we try to catch up, everything accelerates. “One of the things I do worry about is that it’s quite easy to focus on obvious examples of misinformation,” Mostrous says. Deepfake videos, for instance. “Whereas actually, I think if we look at really effective misinformation campaigns, they don’t create lies out of thin air. It’s more that they pose as people who are putting out little bits of truth, but the truth is taken out of context – it’s the out of context bit that drives a real sense of division. And that’s much harder to deal with.”
Effective disinformation relies on information overload. It’s an abuse of how we consume news online now, Mostrous suggests: “If we’re just going scroll, scroll, click, click, flick, flick, then we don’t have time to parse the real from the fake. It’s the same with the Russian attempts to subvert the US elections. They didn’t put out lies or fake news, so much as they harnessed and weaponised real news, in a way that increased division. I think that’s the real danger that we’ve got to face up to.”
‘Who Trolled Amber?’ is available now