Is Accountability Finally Coming for Online Platforms?
Ever since a handful of corporations took over the internet, Big Tech bosses have maintained that they aren’t legally responsible for real-world harms originating on and enabled by their platforms. They have largely shielded themselves from accountability even as the danger to the public mounts. In recent weeks, however, the industry has suffered a trio of shocks suggesting that those days may be coming to an end.
First, on August 24, Telegram cofounder and CEO Pavel Durov was arrested in Paris on suspicion of complicity in allowing harm – including the sale of illegal drugs and child sexual abuse material – on his platform.
Then, the US Court of Appeals for the Third Circuit upended decades of precedent by ruling that TikTok could be held liable for content that led a ten-year-old girl to unintentionally hang herself. In so doing, it broke with the historic protection provided by Section 230 of the Communications Decency Act. Until now, Section 230 has essentially given Big Tech platforms what Matt Stoller called “a de facto Get Out of Jail free card as long as they could say ‘the algorithm did it.’” It’s hard to overstate the influence Section 230’s protection has had on the shape of the internet as we know it. As Stoller says: “This law created the business model of big tech platforms […] [it] not only made a world where Mark Zuckerberg didn’t have to care whether he was hurting kids, it made a world where he would lose out to rivals if he did.”
Finally, Brazil’s Supreme Court announced a national ban on X early this month, due to the company’s failure to obey court orders on content moderation (more on that in a moment) and to appoint a legally responsible person as its representative in the country. Brazil has also frozen the assets of Starlink, the satellite internet company owned by X’s owner Elon Musk, and said it is considering further sanctions.
Three companies, three different events on three continents, with one clear message to the tech industry: accountability is coming.
Responding to Durov’s arrest, Telegram issued a statement saying it is “absurd to claim that a platform or its owner are responsible for abuse of that platform.”
It doesn’t seem quite so absurd anymore.
At Foxglove, the tech justice non-profit I co-direct, we are all too familiar with the ways real-world mayhem and murder can be fueled and intensified by social media.
Consider the case of Abrham Meareg, an Ethiopian man whose father was murdered after racist lies about him were posted to Facebook, alongside his home address. As soon as he saw the posts claiming his father was a traitor who “sucked blood,” Abrham says he knew they meant a death sentence.
He pleaded with Facebook to take them down. The company refused. Weeks later, his father was dead, gunned down by men on motorcycles outside his home. He’d never even had a Facebook account. But that didn’t save him. Foxglove is supporting Abrham’s case against Facebook’s owner Meta.
Foxglove is also supporting the case of hundreds of content moderators in Kenya, which, until last year, was Facebook’s moderation hub for content from East and Southern Africa. Facebook outsourced these workers and failed to provide them with the resources they needed to do the work safely – leaving many suffering from mental illnesses including PTSD. When they organized for better conditions, they were all fired.
Content moderators are the key workers responsible for making social media safe to use. It sounds simple – and perhaps it should be – but since the dawn of social media it has proven to be complex, difficult, and dangerous work.
Perhaps that’s why Elon Musk has decided, in effect, to stop doing it at X – which is a bit like saying it’s difficult to build brick houses that won’t catch fire, so let’s build them out of cardboard instead.
It is worth restating a simple truth: the first job of any company is making sure its product is safe for the public. Everything else comes second. And as The Verge’s Nilay Patel once said: “The essential truth of every social network is that the product is content moderation.”
Durov’s arrest, the Brazilian ban on X, and the Third Circuit ruling are promising signals of a change in approach. At this point, though, they remain exceptions to the rule of Big Tech’s immunity from consequences for its most catastrophic failures. Individuals and communities across the world continue to pay that price with their lives and safety.
This summer in the United Kingdom, where Foxglove is headquartered, we saw far-right racist violence in which platforms like Telegram, X, and Facebook played an “instrumental” role. First they were used to spread lies about a horrendous knife attack, then to organize far-right violence against Muslims, refugees, and immigration services that help folks legally enter the country. This content was allowed to proliferate to great public detriment. And while the perpetrators and online organizers are being duly prosecuted, the government appears to be giving a pass to the companies that contributed to the scale of the atrocity.
We are nonetheless encouraged to see democratic states – and their courts – begin to turn the tables on the unaccountable tech behemoths. But this isn’t a problem we can simply arrest and incarcerate our way out of.
Laws must be changed and political will exerted so that tech platforms like Telegram, X, and Facebook can be held accountable for the violence that they incubate. Platforms must be forced to resource their content moderation in a way that is proportionate to their millions or billions of users, and ensure that the work is safe, dignified, and well-paid. Tough, independent oversight is required – no more platforms marking their own homework or forming quasi-independent advisory boards to do it for them.
Perhaps Durov spent some of his time in his Parisian detention contemplating the role he’d played in the recent violence, or the harms to children perpetrated on his invention. Perhaps not. We should know by now that we can’t rely on the conscience of tech barons to save us – we need governments to step into that breach.
This piece originally appeared on Tech Policy Press.