MPs are investigating how social media fuelled last summer’s racist attacks – read our submission

At Foxglove, we are very familiar with how violence and hate on social media can fuel mayhem and murder that explodes in the real world. For over two years, we’ve been supporting the case of Abrham Meareg, an Ethiopian man whose father, a respected chemistry professor at Bahir Dar University, was murdered after he was doxed on Facebook.

We also keep an eye on violence closer to home that is incubated on social media. Like last summer, when racist far-right attacks across the UK culminated in the firebombing of a Holiday Inn housing refugees in Rotherham, South Yorkshire.

So when we heard that the new parliamentary Science, Innovation and Technology Committee was opening an inquiry into the role social media played in inciting the violence of last summer, Foxglove was pleased to make a submission. You can read it here:

We encouraged the committee to investigate two main areas: social media's algorithms, which determine what users see, hear and read on platforms like Facebook, X and TikTok; and the essential safety work done by humans – the content moderators.

Anyone familiar with Foxglove's work with content moderators will know that their job is extremely difficult and is routinely done in brutal conditions.

It is the job of content moderators to look at the really bad stuff people post to social media: murder, the sexual exploitation of children, torture, suicide, animals being torn apart and abused – as well as hate speech and content that incites violence or genocide.

As Facebook whistleblower Frances Haugen has said, humans have to look at this horror because Meta's automated tools aren't smart enough to do so. That means human content moderators are the most important safety workers in social media. And that's why we think the committee should take a hard look at how Big Tech runs content moderation and hamstrings its own safety workers, preventing them from doing this essential job.

The second point of our submission focuses on the algorithms of platforms like Facebook that decide what to show you when you open the app. These algorithms are designed to promote exactly the kind of violent and extreme content that content moderators exist to combat. The kind of content that incited the men who murdered Abrham's father – and gave them his home address.

Why? Cash, and lots of it. Social media makes most of its money from advertising. Meta's ad income for 2023 was just north of $130 billion. Its advertising model is based on engagement: the longer it keeps us clicking, the longer it can serve us ads. And the best way to keep us clicking is to serve up the most toxic and shocking content to provoke an emotional reaction.

So, you have a business model that depends on getting as many users as possible – in Meta’s case, around 3 billion – then serving more and more extreme content to keep them engaged for ads, while systematically under-resourcing the staff who are charged with keeping them safe.

That status quo of social media is what lit the match on the explosion of violence that ended with the burning of a hotel housing refugees in Rotherham.

In truth, what happened last summer wasn’t a tragic accident, or a systems failure. It was the logical result of a product working as designed. 

And unless it is stopped, soon enough it will do so again.

Foxglove will do all we can to support the committee with its inquiry.

To keep updated on this work, hit the button below: