
Pushing platforms to go further: Ofcom sets out more online protections

Published: 30 June 2025
  • First online safety rules are in place, enforcement is underway, change is happening
  • Ofcom now sets out additional ways tech firms should protect people in the UK
  • Proposals include stopping illegal content going viral, protecting children when livestreaming, and tackling intimate images shared without consent 

Tech companies should take action to stop illegal content from going viral, tackle terrorism content and explicit deepfakes at source, and stop children from being groomed through livestreams, under new proposals from Ofcom.

The new measures continue Ofcom’s implementation of the Online Safety Act. They build on our illegal harms and children’s safety codes of practice, which are already in place and being enforced.

Online harms and technology evolve quickly. We are keeping pace with developments and listening to the feedback and evidence we have received, and are now pushing platforms to go further by proposing additional measures to strengthen our existing codes. 

Stopping illegal content going viral 

If illegal content spreads rapidly online, it can lead to severe and widespread harm, especially during a crisis, such as the violent riots that followed the Southport murders last year, or if a terrorist attack is livestreamed. Recommender systems can exacerbate this.

To prevent this from happening, platforms should have protocols in place to respond to spikes in illegal content during a crisis, and should not recommend material to users where there are indicators it might be illegal, unless and until it has been reviewed. 

If a site or app allows livestreaming, it should have a system that clearly flags when a user reports a livestream showing a risk of imminent physical harm, and should have human moderators available at all times to review that content and take action in real time.

Tackling harms at source 

Huge volumes of content appear online every day, so providers need to make effective use of technology to make their sites and apps safer by design and prevent illegal material from reaching users. They should use a technique called hash matching to detect terrorism content and intimate images that are shared without consent, such as explicit deepfakes. 
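By way of illustration only, the sketch below (not Ofcom's or any provider's actual system) shows the basic idea behind hash matching in Python: each upload is reduced to a fingerprint and compared against a database of fingerprints of known illegal material, with matches blocked or sent for review. The function names and hash list are hypothetical, and real deployments rely on perceptual hashing so that re-encoded or slightly altered copies still match.

```python
# Illustrative sketch only: hash matching an upload against a list of known
# items. All names here are hypothetical. Production systems typically use
# perceptual hashes (PhotoDNA- or PDQ-style) so that altered copies of known
# terrorism content or non-consensual intimate images still match; an exact
# SHA-256 comparison is used below purely for simplicity.
import hashlib


def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint of an uploaded file."""
    return hashlib.sha256(content).hexdigest()


# Hypothetical hash list, in practice supplied by bodies such as the
# Internet Watch Foundation or industry hash-sharing schemes.
KNOWN_ILLEGAL_HASHES = {fingerprint(b"known illegal item (placeholder)")}


def should_block(upload: bytes) -> bool:
    """Return True if the upload matches an entry in the hash list."""
    return fingerprint(upload) in KNOWN_ILLEGAL_HASHES


# Example: screen an upload before it is published or recommended to others.
if should_block(b"known illegal item (placeholder)"):
    print("Match found: block the upload and queue it for human review.")
```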

We also propose that some services should assess the role automated tools can play in detecting content such as previously undetected child sexual abuse material, content promoting suicide and self-harm, and fraudulent content, and use those tools where they are available and effective.

Further protections for children 

Livestreaming has many benefits – for gaming, showcasing talents, citizen journalism or sharing real-world experiences. However, children risk being groomed, coerced into performing sexual acts, or encouraged into acts of self-harm and suicide while livestreaming – this must change.

We are proposing that sites and apps should prevent people from posting comments or reactions or sending gifts to children’s livestreams, and they should prevent people from recording children’s livestreams. 

Under our existing codes, providers should already be taking steps to protect children from grooming. Now that we have published our guidance on highly effective age assurance, platforms should use robust age checks to underpin the measures they take to protect children from grooming and harms associated with livestreaming. 

We also expect companies to ban users who share child sexual exploitation and abuse material. 

Oliver Griffiths, Ofcom’s Online Safety Group Director, said: “Important online safety rules are already in force and change is happening. We’re holding platforms to account and launching swift enforcement action where we have concerns. 

“But technology and harms are constantly evolving, and we’re always looking at how we can make life safer online. So today we’re putting forward proposals for more protections that we want to see tech firms roll out.” 

What happens next 

Our consultation on today’s proposals is open until 20 October 2025. We will carefully consider all feedback we receive before making our final decisions, which we aim to publish by Summer 2026. 

We have also today published an update on our other areas of work to implement the Online Safety Act.

END 

Notes to editors: 

  1. In March, duties under the UK’s Online Safety Act came into force that mean social media, search, messaging, gaming and dating sites must now take steps to protect their UK users from illegal content and activity. From the end of July, platforms must start implementing appropriate safety measures to protect children from certain types of harmful material. Wherever in the world a service is based, if it has ‘links to the UK’, it now has duties to protect UK users. This can mean that it has a significant number of UK users, that the UK is a target market, or that the service is capable of being used by people in the UK and poses a material risk of significant harm to them. Ofcom’s proposed measures only relate to the design or operation of a service in the UK or as it affects UK users.