Molly Russell: Is social media regulated in the UK and what can tech firms do to protect users?


The coroner's ruling that schoolgirl Molly Russell died after suffering from "negative effects of online content" has highlighted the need for regulation of social media platforms and better systems to protect users from harmful content.

Molly, 14, viewed material linked to topics such as depression, self-harm and suicide before ending her life in November 2017.

In the wake of her death, her family have campaigned for better internet safety and for big technology companies to prioritise the safety and wellbeing of young people over profit.

Just how regulated is social media and how could it be improved to protect vulnerable young people like Molly?

What are the current regulations?

There is very little regulation of social media in the UK, and what currently exists largely relates to advertising, copyright, and defamation and libel laws.

Online activity can also fall under a limited set of specific laws protecting people from threats of violence, harassment, and offensive, indecent or menacing behaviour.

Molly, pictured as a young child, died after viewing content related to depression, suicide and self-harm on social media. Credit: Family handout

How do platforms fight illegal or harmful material?

Platforms instead self-regulate with a mixture of human moderators and artificial intelligence to find and take down illegal or harmful material proactively. They might also act when users report it to them.

Platforms lay out what types of content are and are not allowed on their sites in their terms of service and community guidelines.

Why does the current system not work, according to critics?

Campaigners say this system of self-regulation is flawed and does not do enough to keep online spaces safe.

What is and is not regarded as safe or acceptable online can vary widely from site to site, and many moderation systems struggle to keep up with the vast amounts of content being posted.

Concerns have also been raised about algorithms used to serve users with content a platform thinks might interest them. This is often based on a user’s habits when on the site, which can mean someone who searches for material linked to depression or self-harm could be shown more of it in the future.

In addition, some platforms argue that certain types of content which are not illegal – but could be considered offensive or potentially harmful by some – should be allowed to remain online to protect free speech and expression.

Molly Russell died in November 2017. Credit: Family handout/PA

What do the tech companies say?

During the inquest, evidence given by executives from both Meta and Pinterest highlighted the issues.

Pinterest executive Judson Hoffman admitted the platform was “not safe” when Molly accessed it in 2017 because it did not have in place the technology it has now.

Meta executive Elizabeth Lagone’s evidence highlighted the issue of understanding the context of certain posts when she said some of the content seen by Molly was “safe” or “nuanced and complicated”, arguing that in some instances it was “important” to give people a voice if they were expressing suicidal thoughts.

Meta was represented in court by executive Elizabeth Lagone Credit: PA Stills

How is the government addressing these concerns?

The Online Safety Bill, due to be reintroduced to parliament soon, seeks to change this landscape and force social media platforms to take action to protect users from online harms.

The bill would, for the first time, place a legal duty on platforms to protect users, particularly children, by requiring them to take down illegal and other harmful content.


Companies will be required to spell out clearly in their terms of service what content they consider to be acceptable and how they plan to prevent harmful material from being seen by their users.

It is also expected to require firms to be more transparent about how their algorithms work and to set out clearly how younger users will be protected from harm.

The new regulations will be overseen by Ofcom, and companies found to breach the rules could face large fines or have their services blocked in the UK.