File-storage and file-sharing: know the online safety risks, the rules, and how to comply

Published: 21 November 2025

The Online Safety Act makes online businesses legally responsible for keeping UK users - especially children - safe online, even if their business is based outside of the UK.

Ofcom, the UK’s independent regulator responsible for implementing the Act, has published guidance for providers of file-storage and file-sharing services to help you:

  • find out how the rules apply to file-storage and file-sharing services;
  • understand the risks these services can present to users; and
  • know what you need to do to comply.

How the Online Safety Act applies

The Online Safety Act applies to certain types of online services, including those where users can interact with each other, or create, share or upload content (see further below). You can use our toolkit to find out how the rules are likely to apply to file-storage and file-sharing services that enable users to upload, store, manage and share access to digital media.

The rules apply to services that:

  • have links to the UK; and
  • are a ‘user-to-user’ service – an internet service that enables users to generate, share or upload content (such as messages, images, videos, comments, audio) on the service that may be encountered by other users of the service. This includes services that enable user interactions.

The online safety risks in file-storage and file-sharing

File-storage and file-sharing services are at increased risk of the following online harms, as evidenced in our Register of Risks:

  • Terrorism - Tech Against Terrorism research suggests that the majority of terrorism content is identified on file-storage and file-sharing services.
  • Child sexual exploitation and abuse, in particular image-based child sexual abuse material (CSAM) - according to Internet Watch Foundation research, 89% of detected images or videos of livestreamed child sexual abuse were stored on an ‘image-hosting service’.
  • Intimate image abuse - analysis of case data from the Revenge Porn Helpline identified file-sharing services as a location for collecting and sharing intimate images, where large collections of images can be assembled and shared via a single link; file-sharing sites made up 1% of the sample of public URLs from cases between 2018 and 2022.

These are just some examples of illegal content that can appear on file-storage and file-sharing services. Providers of online services need to assess the risk of 17 kinds of illegal content. You can read more in our Register of Risks (PDF, 5.69 MB) for illegal content.

How to comply

The Online Safety Act introduces legal duties for providers of regulated services to assess the risk of harm from: 

  • illegal content; and 
  • certain kinds of content harmful to children.

It is your responsibility to comply with the law. Ofcom has produced guidance and resources to help you understand the legal duties. You can read these documents in full, or start with our guide for services, which will help you:

  • check if the Online Safety Act applies to you;  
  • carry out your illegal content risk assessment, and put in place protections under the illegal content safety and related duties; 
  • complete your children’s access assessment and, if applicable, carry out your children’s risk assessment and put in place protections under the children’s safety and related duties; and 
  • comply with the record-keeping and review duties for the activities above.

Service providers are expected to implement the recommended measures in our Illegal Content Codes of Practice, including, for services at a greater level of risk, the use of perceptual hash-matching technology to detect and remove image-based CSAM (see further below).

Reporting requirements 

The Online Safety Act contains provisions for reporting of child sexual exploitation and abuse (CSEA) content. When the provisions come into effect from Spring 2026, in-scope regulated services will be required to report all detected and unreported CSEA content to the National Crime Agency (NCA).

Until then, providers of services can continue to act as they do now, including reporting under international law to existing reporting bodies and law enforcement. Suspected CSEA cases with a UK nexus reported via alternative channels, including reports made internationally, will continue to be passed to UK law enforcement.

This page provides general guidance to assist providers; it is not a substitute for obtaining your own legal advice where necessary to ensure you comply with your statutory obligations under the Act.

Using perceptual hash matching to detect and remove CSAM 

The use of automated detection technology to detect and remove illegal content is an established industry practice that helps prevent perpetrators from exploiting platforms such as yours.

One way you can detect and remove CSAM in particular is through perceptual hash matching.

Our Illegal Content Codes of Practice recommend that certain service providers, including file-storage and file-sharing services at high risk of image-based CSAM, use perceptual hash-matching technology effectively to assess whether content is CSAM and, if so, to swiftly take it down or prevent it from being generated, uploaded or shared.

Certain services may be at a greater level of risk. We would normally expect a service to be at high risk of image-based CSAM where:

  • the service is a file-storage and file-sharing service and enables images or videos to be generated, uploaded or shared; or 
  • there is evidence that image-based CSAM has been present to a significant extent on the service.

You assess your risk level by carrying out an illegal content risk assessment. If you assess your service as lower risk, you need to record strong reasons for doing so.

What is perceptual hash matching? 

‘Hashing’ is an umbrella term for techniques used to create digital ‘fingerprints’ of content, which can then be stored in a database. An algorithm known as a hash function is used to compute a fingerprint, known as a hash, from a file. Comparing such a hash with another hash stored in a database is called hash matching. In the context of online safety, hash matching can be a primary means for the detection of known illegal or otherwise harmful images and videos.  

Providers of online services can use hash matching by generating hashes of the content on their service and comparing them against the hashes in a hash database. This tests whether any uploaded content is a ‘match’ for known images, so that it can be removed from the service if it is illegal or harmful.

There are two types of hash matching. Ofcom’s Illegal Content Codes of Practice recommend the use of ‘perceptual’ hash-matching technology to detect and remove CSAM. Perceptual hashing is more likely to detect content that is slightly altered from, or similar to, known CSAM, whereas ‘cryptographic’ hash matching can only detect identical images. This allows a greater volume of CSAM to be detected and swiftly removed.
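
To make the difference concrete, below is a minimal sketch, assuming Python and the Pillow imaging library, contrasting a cryptographic hash (SHA-256 of the raw bytes) with a simple perceptual ‘difference hash’ matched by Hamming distance against a set of known hashes. The specific algorithm, the 64-bit size and the distance threshold are illustrative assumptions; in practice you would use hash databases and matching technology supplied by expert organisations.

```python
# Illustrative sketch only: contrasts a cryptographic hash with a simple
# perceptual "difference hash" (dHash). Real CSAM hash matching uses
# databases and technology supplied by expert organisations.
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """SHA-256 of the file's bytes: changes completely if even one byte changes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def perceptual_dhash(path: str) -> int:
    """64-bit difference hash: tolerant of resizing, recompression and small edits."""
    img = Image.open(path).convert("L").resize((9, 8))  # greyscale, 9x8 pixels
    pixels = list(img.getdata())
    bits = 0
    for row in range(8):
        for col in range(8):
            left, right = pixels[row * 9 + col], pixels[row * 9 + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of bits that differ between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def is_match(candidate: int, known_hashes: set[int], max_distance: int = 8) -> bool:
    """A candidate matches if it is within max_distance bits of any known hash."""
    return any(hamming_distance(candidate, h) <= max_distance for h in known_hashes)
```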

How to implement perceptual hash-matching 

Below is a simplified, step-by-step guide to using perceptual hash-matching technology; however, this is not a substitute for your legal obligations, which you must independently ensure you understand and comply with.

Steps to follow


Step 1: Source one or more databases of CSAM hashes

We recommend that you source a database from at least one organisation with expertise in identifying CSAM, reflecting the range of material that is illegal under UK law. You should ensure that you regularly obtain the latest versions of this database. 

You can use a third-party provider who will manage the sourcing and implementation process for you (for example, using a third-party API).  

You may also collect hashes of content that has previously been identified and removed on your own service, and build the hashing and matching technology in-house using these databases. However, you should ensure that arrangements are in place so that CSAM is correctly identified before hashes of that material are added, to review cases where material is suspected to have been incorrectly identified as CSAM, and to remove such hashes where appropriate.

You may use a mixture of these approaches.  

There are a number of third-party services that can support you in implementing hash matching, including, but not limited to, the national bodies for reporting CSAM.

These organisations own hash databases that providers can use. There are other third-party providers available to suit a range of service types and pricing models.

While we cannot recommend any particular hash-matching service, you may wish to speak to online trust and safety industry bodies or safety tech organisations to understand the options available. 
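
As an illustration of how these approaches could fit together, the sketch below maintains a local hash set that is refreshed from a hypothetical expert-organisation feed (EXPERT_DB_URL is an assumed placeholder, not a real endpoint or API) and only adds internally generated hashes once the underlying material has been verified as CSAM, with a removal path for hashes later found to be incorrect. It is a conceptual sketch, not a production integration.

```python
# Conceptual sketch of maintaining a local hash set. EXPERT_DB_URL is a
# hypothetical placeholder; no real provider API is implied.
import json
import urllib.request

EXPERT_DB_URL = "https://example.org/csam-hash-feed"  # hypothetical endpoint

class HashDatabase:
    def __init__(self) -> None:
        self.hashes: set[int] = set()

    def refresh_from_expert_source(self) -> None:
        """Pull the latest hash list from the expert organisation's feed (assumed to be a JSON list of hex strings)."""
        with urllib.request.urlopen(EXPERT_DB_URL) as resp:
            self.hashes.update(int(h, 16) for h in json.load(resp))

    def add_internal_hash(self, h: int, verified_as_csam: bool) -> None:
        """Only add hashes of content that has already been correctly identified as CSAM."""
        if verified_as_csam:
            self.hashes.add(h)

    def remove_incorrect_hash(self, h: int) -> None:
        """Remove a hash after review finds the material was not in fact CSAM."""
        self.hashes.discard(h)
```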

Step 2: Analyse content uploaded to your service

This involves analysing any user-generated content in the form of photographs, videos or other visual material which is communicated publicly on the service and may be encountered by UK users of your service.

We have recommendations in the code around: 

  • how much content should be analysed; 
  • at what point of upload the content should be analysed; and
  • how to configure the precision and recall of the technology.

For further information on how to do this, please consult our Illegal Content Codes of Practice.
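
As an illustration of this step, the sketch below (building on the earlier sketches) checks a publicly communicated upload against the hash database at the point of upload; the distance threshold is where precision and recall are configured. The threshold value and the decision labels are illustrative assumptions, not settings taken from the Codes.

```python
# Sketch of checking newly uploaded content against the hash database,
# reusing perceptual_dhash, is_match and HashDatabase from the sketches above.
# MAX_DISTANCE is the precision/recall knob: lower values mean fewer false
# positives (higher precision); higher values catch more altered copies
# (higher recall). The value 8 is an illustrative assumption.
MAX_DISTANCE = 8

def scan_upload(path: str, db: HashDatabase, communicated_publicly: bool) -> str:
    """Return a decision for an uploaded image before it is made available to other users."""
    if not communicated_publicly:
        return "out_of_scope"  # the measure applies to content communicated publicly
    candidate = perceptual_dhash(path)
    if is_match(candidate, db.hashes, MAX_DISTANCE):
        # A potential match must be reviewed and, if confirmed as CSAM, swiftly removed.
        return "block_and_queue_for_review"
    return "allow"
```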

Step 3: Review detected content, keep records and secure hashes

We recommend that you put in place a policy to review content detected by the hash-matching technology. Our Illegal Content Codes of Practice provide details of what this policy should take into account.

You should also keep statistical records about the content reviewed (a simple record-keeping sketch follows this list), which should include: 

  • the number of reviews carried out;
  • the proportion of detected content this represents; and
  • the number of false positives identified.
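
One simple way to keep these records is sketched below; the field names and structure are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch of the statistical records listed above, using an in-memory
# counter. The schema is illustrative, not prescribed.
from dataclasses import dataclass

@dataclass
class ReviewStats:
    detected_items: int = 0        # items flagged by the hash-matching technology
    reviews_carried_out: int = 0   # flagged items that received human review
    false_positives: int = 0       # reviewed items found not to be CSAM

    def record_detection(self) -> None:
        self.detected_items += 1

    def record_review(self, was_false_positive: bool) -> None:
        self.reviews_carried_out += 1
        if was_false_positive:
            self.false_positives += 1

    @property
    def proportion_reviewed(self) -> float:
        """Proportion of detected content that has been reviewed."""
        if self.detected_items == 0:
            return 0.0
        return self.reviews_carried_out / self.detected_items
```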

We recommend that you ensure an appropriate policy is put in place to secure any hashes of CSAM from unauthorised access, interference or exploitation (whether by persons who work for you, are providing a service to you, or any other persons). 

For the full detail of our recommendations on the effective use of perceptual hash matching technology, you should review our Illegal Content Codes of Practice.

Right to privacy and freedom of expression related to hash matching 

We understand that privacy is important to your users. Hash matching is an automated analysis of content to check whether its hash (‘digital fingerprint’) matches an internal or external hash database of known illegal content. Only when there is a potential match with a known image of CSAM must the provider of the service review the content to consider if it needs to be removed. 

We have carried out a rights impact assessment to support you in implementing hash-matching technology in a way that protects your users’ right to freedom of expression, while acknowledging that child abuse images are a severe violation of the privacy of the child depicted.  

This measure only applies to content communicated publicly on your service. You can find guidance on how to judge whether content is public or private.   

There are provisions in our recommendation to ensure that: 

  • content is correctly identified as CSAM before its hashes are added to databases, and hashes are removed if the content is not CSAM;
  • services configure their technology to manage false positives appropriately; and
  • there is an appropriate level of human review and an appeals process for wrongful takedown.

You can find full details of this in our December 2024 Illegal Harms Statement.  

Additional safety measures considerations 

We have also recently closed our Additional Safety Measures Consultation, in which we proposed additional recommendations for file-storage and file-sharing services.

You can read our roadmap to regulation and important dates for Online Safety compliance to keep up to date. You can also raise queries and subscribe for updates on any changes to the regulations and what you need to do.