Sunday, 16 November 2025

Data protection and the Online Safety Act revisited

The Information Commissioner’s Office has recently published its submission to Ofcom’s consultation on additional safety measures under the Online Safety Act.

The consultation is the second instalment of Ofcom’s iterative approach to writing Codes of Practice for user-to-user and search service providers. The first round culminated in Codes of Practice that came into force in March 2025 (illegal content) and July 2025 (protection of children). A service provider that implements the recommendations in an Ofcom Code of Practice is deemed to comply with the various safety duties imposed by the Act.

The recommendations that Ofcom proposes in this second instalment are split almost equally between content-related and non-content measures (see Annex for a tabular analysis). Content-related measures require the service provider to make judgements about items of user content. Non-content measures are not directly related to user content as such.

Thus the non-content measures mainly concern age assessment, certain livestreaming features and functionality that Ofcom considers should not be available to under-18s, and default settings for under-18s. Two more non-content measures concern a livestream user reporting mechanism and crisis response protocols.

The content-related measures divide into reactive (content moderation, user sanctions and appeals) and proactive (automated content detection in various contexts). Ofcom cannot recommend use of proactive technology in relation to user content communicated privately.

The applicability of each measure to a given service provider depends on various size, risk, functionality and other criteria set by Ofcom. 

Proactive content-related measures are especially controversial, since they involve platforms deploying technology to scan and analyse users’ content with a view to it being blocked, removed, deprioritised or affected in some other way. 

The ability of such technology to make accurate judgements is inevitably open to question, not only because of limitations of the technology itself but also because illegality often depends on off-platform contextual information that is not available to the technology. Inaccurate judgements result in false positives and, potentially, collateral damage to legitimate user content.  
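
To put rough numbers on why this matters at platform scale, here is a purely illustrative back-of-envelope calculation. All of the figures are hypothetical assumptions, not drawn from the consultation or from any real service:

```python
# Back-of-envelope illustration (hypothetical numbers) of how a classifier
# that looks accurate in percentage terms can still flag large volumes of
# legitimate content when run across every item on a big platform.

posts_per_day = 10_000_000       # hypothetical daily volume of user posts scanned
prevalence = 0.001               # hypothetical share of posts that are actually illegal
false_positive_rate = 0.005      # hypothetical: 0.5% of legitimate posts wrongly flagged
true_positive_rate = 0.95        # hypothetical: 95% of illegal posts correctly flagged

illegal = posts_per_day * prevalence
legal = posts_per_day - illegal

true_positives = illegal * true_positive_rate
false_positives = legal * false_positive_rate

precision = true_positives / (true_positives + false_positives)

print(f"Illegal posts correctly flagged per day: {true_positives:,.0f}")
print(f"Legitimate posts wrongly flagged per day: {false_positives:,.0f}")
print(f"Precision of the flags: {precision:.1%}")
# With these made-up figures roughly 49,950 legitimate posts are flagged daily,
# and only about 16% of flagged items are actually illegal.
```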

The ICO submissions

What does the ICO have to say? Given the extensive territory covered by the Ofcom consultation, quite a lot: 32 pages of detailed commentary. Many, but not all, of the comments concern the accuracy of various kinds of proactive content detection technology.

As befits its regulatory remit, the ICO approaches Ofcom’s recommendations from the perspective of data protection: anything that involves processing of personal data. Content detection, judgements and consequent action are, from the ICO’s perspective, processes that engage the data protection accuracy principle and the overall fairness of processing.

Although the ICO does not comment on ECHR compliance, similar considerations will inform the compatibility of some of Ofcom’s content-related proactive technology recommendations with Article 10 ECHR (freedom of expression).

The ICO’s main comments include:

  • Asking Ofcom to clarify its evidence on the availability of accurate, effective and bias-free technologies for harms in scope of its "principles-based" proactive technology measures. Those harms are, for illegal content: image based CSAM, CSAM URLs, grooming, fraud and financial services, encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content. This is probably the most significant of the ICO's suggestions, in effect challenging Ofcom to provide stronger evidential support for its confidence that such technologies are available for all those kinds of harm.
  • For Ofcom’s principles-based measures, the ICO recommends that a provider, when assessing whether a given technology complies with Ofcom’s proactive technology criteria, should have to consider the “impact and consequences” of incorrect detections, including any sanctions that services may apply to users as a result of such detections. Those may differ for different kinds of harm.
  • Suggesting that Ofcom’s Illegal Content Codes of Practice should specify that services should have “particular consideration regarding the use of an unverified hash database” (as would be permissible under Ofcom’s proposed measure) for Intimate Image Abuse (IIA) content.

Before delving into these specific points, some of the ICO’s more general observations on Ofcom’s consultation are noteworthy.

Data protection versus privacy

The ICO gently admonishes Ofcom for conflating Art 8 ECHR privacy protections (involving consideration of whether there is a reasonable expectation of privacy) with data protection.

For example, section 9.158 of the privacy and data protection rights assessment suggests that the degree of interference with data protection rights will depend on whether the content affected by the measures is communicated publicly or privately. This is not accurate under data protection law; irrespective of users’ expectations concerning their content (and/or associated metadata), data protection law applies where services are processing personal data in proactive technology systems. Services must ensure that they comply with their data protection obligations and uphold users’ data protection rights, regardless of whether communications are deemed to be public or private under the OSA.  

The ICO suggests that it may be helpful for Art 8 and data protection to be considered separately.

Data protection, automated and human moderation

The ICO “broadly supports” Ofcom’s proposed measures for perceptual hash-matching for IIA and terrorism content (discussed further below). However, in this context it again takes issue with Ofcom’s conflation of data protection and privacy. This time the ICO goes further, disagreeing outright with Ofcom’s characterisations:

For example, the privacy and data protection rights assessments for both the IIA and terrorism hash matching measures state that where services carry out automated processing in accordance with data protection law, that processing should have a minimal impact on users’ privacy. Ofcom also suggests that review of content by human moderators has a more significant privacy impact than the automated hash matching process. We disagree with these statements. Compliance with data protection law does not, in itself, guarantee that the privacy impact on users will be minimal. Automation carries inherent risks to the rights and freedoms of individuals, particularly when the processing is conducted at scale.

The ICO’s disagreement with Ofcom’s assessment of the privacy impact of automated processing harks back to the ICO’s comments on Ofcom’s original Illegal Harms consultation last year. Ofcom had said:

Insofar as services use automated processing in content moderation, we consider that any interference with users’ rights to privacy under Article 8 ECHR would be slight.

The ICO observed in its submission to that consultation:

From a data protection perspective, we do not agree that the potential privacy impact of automated scanning is slight. Whilst it is true that automation may be a useful privacy safeguard, the moderation of content using automated means will still have data protection implications for service users whose content is being scanned. Automation itself carries risks to the rights and freedoms of individuals, which can be exacerbated when the processing is carried out at scale.

Hash-matching, data protection, privacy and freedom of expression

In relation to hash-matching the ICO stresses (as it does also in relation to Ofcom’s proposed principles-based measures, discussed below) that accuracy of content judgements impacts not only freedom of expression, but privacy and data protection:

For example, accuracy of detections and the risk of false positives made by hash matching tools are key data privacy considerations in relation to these measures. Accuracy of detections has been considered in Ofcom’s freedom of expression rights assessment, but has not been discussed as a privacy and data protection impact. The accuracy principle under data protection law requires that personal information must be accurate, up-to-date, and rectified where necessary. Hash matching tools may impact users’ privacy where they use or generate inaccurate personal information, which can also lead to unfair consequences for users where content is incorrectly actioned or sanctions incorrectly applied.
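
It may help to recall why perceptual hash matching is probabilistic in the first place. The sketch below is a minimal, generic illustration, not Ofcom’s recommended technology or any vendor’s product: perceptual hashes of visually similar images are close rather than identical, so matching relies on a distance threshold, and the choice of threshold is where the false positive risk enters:

```python
# Minimal illustration of perceptual hash matching (not any specific product).
# Unlike cryptographic hashes, perceptual hashes of visually similar images are
# close but not identical, so matching uses a distance threshold - and the
# choice of threshold trades missed matches against false matches.

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def is_match(candidate_hash: int, database_hashes: list[int], threshold: int = 8) -> bool:
    """Treat the candidate as a match if it is within `threshold` bits of any
    database entry. A higher threshold catches more altered copies (crops,
    re-encodes) but also flags more unrelated images as false positives."""
    return any(hamming_distance(candidate_hash, h) <= threshold for h in database_hashes)

# Hypothetical example: an uploaded image whose hash differs by a few bits
# from a database entry (e.g. because it was resized) is still flagged.
database = [0x9F3A6C21D4E8B07F]
upload = 0x9F3A6C21D4E8B07C   # differs in 2 bits from the database entry
print(is_match(upload, database))   # True with threshold=8
```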

Principles-based proactive technology measures - evidence of available technology

The Ofcom consultation proposes what it calls "principles-based" measures (ICU C11, ICU C12, PCU C9, PCU C10), requiring certain U2U platforms to assess available proactive technology and to deploy it if it meets proactive technology criteria defined by Ofcom. 

These would apply to certain kinds of “target” illegal content and content harmful to children. Those are, for illegal content: image based CSAM, CSAM URLs, grooming, fraud (and financial services), and encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content. 

Ofcom says that it has a higher degree of confidence that proactive technologies that are accurate, effective and free from bias are likely to be available for addressing those harms. Annex 13 of the consultation devotes 7 pages to Ofcom's evidence supporting that.

The ICO says that it is not opposed to Ofcom’s proposed proactive technology measures in principle. But as currently drafted the measures “present a number of questions concerning alignment with data protection legislation”, which the ICO describes as “important points”.

In the ICO’s view there is a “lack of clarity” in the consultation documents about the availability of proactive technology that meets Ofcom's proactive criteria for all harms in scope of the measures. The ICO suggests that this could affect how platforms go about assessing the availability of suitable technology:

…we are concerned that the uncertainty about the effectiveness of proactive technologies currently available could lead to confusion for organisations seeking to comply with this measure, and create the risk that some services will deploy technologies that are not effective or accurate in detecting the target harms.

It goes on to comment on the evidence set out in Annex 13:

Annex 13 outlines some evidence on the effective deployment of existing technologies, but this is not comprehensively laid out for all the harms in scope. We consider that a more robust overview of Ofcom’s evidence of the tools available and their effectiveness would help clarify the basis on which Ofcom has determined that it has a higher degree of confidence about the availability of technologies that meet its criteria. This will help to minimise the risk of services deploying proactive technologies that are incompatible with the requirements of data protection law.

The ICO approaches this only as a matter of compliance with data protection law. Its comments do, however, bear tangentially on the argument that Ofcom’s principles-based proactive technology recommendations, lacking quantitative accuracy and effectiveness criteria, are too vague to comply with Art 10 ECHR.

Ofcom has to date refrained from proposing concrete thresholds for false positives, both in this consultation and in a previous consultation on technology notices under S.121 of the Act. If Ofcom were to accede to the ICO’s suggestion that it should clarify the evidential basis of its higher degree of confidence in the likely availability of accurate and effective technology for harms in scope, might that lead it to grasp the nettle of quantifying acceptable limits of accuracy and effectiveness?

Principles-based proactive technology criteria – variable impact on users

Ofcom’s principles-based measures do set out criteria that proactive technology would have to meet. However, the proactive technology criteria are framed as qualitative factors to be taken into account, not as threshold conditions.

The ICO does not go so far as to challenge the absence of threshold conditions. It supports “the inclusion of these additional factors that services should take into account” and considers that “these play an important role in supporting the accuracy and fairness of the data processing involved.”

However, it notes that:

…the factors don’t recommend that services consider the distinction between the different types of impacts on users that may occur as a result of content being detected as target content.

It considers that:

… where personal data processing results in more severe outcomes for users, it is likely that more human review and more careful calibration of precision and recall to minimise false positives would be necessary to ensure the underpinning processing of personal data is fair.

The ICO therefore proposes that Ofcom should add a further factor, recommending that service providers:

…also consider the impact and consequences of incorrect detections made by proactive technologies, including any sanctions that services may apply to users as a result of such detections. …This will help ensure the decisions made about users, using their personal data, are more likely to be fair under data protection law.

This is all against the background that:

Where proactive technologies are not accurate or effective in detecting the harms in scope of the measures, there is a risk of content being incorrectly classified as target illegal content or target content harmful to children. Such false positive outcomes could have a significant impact on individuals’ data protection rights and lead to significant data protection harms. For example, false positives could lead to users wrongly having their content removed, their accounts banned or suspended or, in the case of detection of CSEA content, users being reported to the National Crime Agency or other organisations.

That identifies the perennial problem with proactive technology measures. However, while the ICO proposal would add contextual nuance to service providers’ multi-factorial assessment of risk of false positives, it does not answer the fundamental question of how many false positives is too many. That would remain for service providers to decide, with the likelihood of widely differing answers from one service provider to the next. Data protection law aside, the question would remain of whether Ofcom’s proposed measures comply with the "prescribed by law" requirement of the ECHR.
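
To illustrate the practical consequence with hypothetical figures (the volumes, prevalence and operating points below are invented for the purpose), two providers deploying the same class of technology, tuned differently, could expose their users to very different levels of collateral damage:

```python
# Illustration (hypothetical numbers) of why, without a quantitative accuracy
# threshold, two providers can both "deploy proactive technology" and yet
# produce very different levels of wrongful content removal. Each operating
# point is a (recall, false positive rate) pair for the same hypothetical
# classifier tuned differently.

posts_scanned = 5_000_000        # hypothetical daily volume
prevalence = 0.002               # hypothetical share of genuinely harmful content

operating_points = {
    "Provider A (precision-leaning)": {"recall": 0.80, "fpr": 0.001},
    "Provider B (recall-leaning)":    {"recall": 0.98, "fpr": 0.020},
}

harmful = posts_scanned * prevalence
legitimate = posts_scanned - harmful

for name, p in operating_points.items():
    caught = harmful * p["recall"]
    wrongly_flagged = legitimate * p["fpr"]
    print(f"{name}: catches {caught:,.0f} harmful items, "
          f"wrongly flags {wrongly_flagged:,.0f} legitimate items per day")
```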

Perceptual hash-matching - sourcing image-based IIA hashes

Ofcom’s recommendations include perceptual hash matching against databases of hashes, for intimate image abuse and terrorist content.

Ofcom proposes that for IIA content hash-matching could be carried out against an unverified database of hashes. That is in contrast with its recommendations for CSAM and terrorism content hash-matching. The ICO observes:

Indeed Ofcom notes that the only currently available third-party database of  IIA hashes does not verify the content; instead, content is self-submitted by victims and survivors of IIA.  

Ofcom acknowledges that third party databases may contain some images that are not IIA, resulting in content being erroneously identified as IIA.

Ofcom said in the consultation:

We are not aware of any evidence of unverified hash databases being used maliciously with the aim of targeting content online for moderation. While we understand the risk, we are not aware that it has materialised on services which use hash matching to tackle intimate image abuse.

Under Ofcom's proposals the service provider would be expected to treat a positive match by perceptual hash-matching technology as “reason to suspect” that the content may be intimate image abuse. It would then be expected to subject an “appropriate proportion” of detected content to human review.

According to Annex 14 of the consultation, among the factors that service providers should consider when deciding what proportion of content to review would be:

The principle that content with a higher likelihood of being a false positive should be prioritised for review, with particular consideration regarding the use of an unverified hash database.

The ICO notes that having “particular consideration regarding use of an unverified hash database” does not appear in the proposed Code of Practice measures themselves. It observes:

Having regard to the use of unverified databases is an important privacy and data protection safeguard. It is our view that due to the increased risk of false positive detections where services use unverified hash databases, services may need to review a higher proportion of the content detect [sic] by IIA hash matching tools in order to meet the fairness and accuracy principles of data protection law.

The ICO recommends that the factor should be added to the Code of Practice. 
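
Purely for illustration, a review-proportion factor of the kind discussed above might translate into a sampling rule along the following lines. This is a hypothetical sketch to make the point concrete, not Ofcom’s proposed measure, the ICO’s recommendation or any provider’s actual practice, and all the rates are invented:

```python
# Hypothetical illustration (not Ofcom's measure or any provider's policy) of
# how "review a higher proportion of detections from an unverified database"
# might translate into a sampling rule for human moderation.

import random

def review_probability(from_unverified_db: bool, hash_distance: int) -> float:
    """Return the probability that a detected item is sent for human review.
    Near-exact matches from a verified database get a low sampling rate;
    looser matches, and anything matched against an unverified database,
    get a higher one. All rates here are illustrative assumptions."""
    base = 0.10 if hash_distance <= 2 else 0.40   # looser matches are riskier
    if from_unverified_db:
        base = min(1.0, base * 2.5)               # unverified source: review more
    return base

def should_review(from_unverified_db: bool, hash_distance: int) -> bool:
    """Randomly sample detections for human review at the chosen rate."""
    return random.random() < review_probability(from_unverified_db, hash_distance)

print(review_probability(from_unverified_db=False, hash_distance=1))  # 0.1
print(review_probability(from_unverified_db=True, hash_distance=5))   # 1.0
```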

Other ICO recommendations

Other ICO recommendations highlighted in its Executive Summary include:

  • Suggesting that additional safeguards should be outlined in the Illegal Content Judgements Guidance where, as Ofcom proposes, illegal content judgements might be made about CSAM content that is not technically feasible to review (for instance on the basis of group names, icons or bios). The ICO also suggests that Ofcom should clarify which users involved in messaging, group chats or forums would be classed as having shared CSAM when a judgement is made on the basis of a group-level indicator.
  • As regards sanctions against users banned for CSEA content, noting that methods to prevent such users returning to the service may engage the storage and access technology provisions of the Privacy and Electronic Communications Regulations (PECR); and suggesting that for the purposes of appeals Ofcom should clarify whether content determined to be lawful nudity content should still be classified as ‘CSEA content proxy’ (i.e. prohibited by terms of service), since this would affect whether services could fully reverse a ban.
  • Noting that implementation of tools to prevent capture and recording of livestreams, in accordance with Ofcom’s recommended measure, may also engage the storage and access technology provisions of PECR.
  • Supporting Ofcom’s proposals to codify the definition of highly effective age assurance (HEAA) in its Codes of Practice; and emphasising that implementation of HEAA must respect privacy and comply with data protection law.

Most of the ICO comments that are not included in its Executive Summary consist of various observations on the impact of, and need to comply with, data protection law.

Annex – Ofcom’s proposed additional safety measures

Each entry below gives the recommended measure, Ofcom’s reference and its categorisation.

Livestreaming

  • User reporting mechanism that a livestream contains content that depicts the risk of imminent physical harm. (ICU D17 – Non-content)
  • Ensure that human moderators are available whenever users can livestream. (ICU C16 – Reactive content-related)
  • Ensure that users cannot, in relation to a one-to-many livestream by a child (identified by highly effective age assurance) in the UK: (a) comment on the content of the livestream; (b) gift to the user broadcasting the livestream; (c) react to the livestream; (d) use the service to screen capture or record the livestream; or (e) where technically feasible, use other tools outside of the service to screen capture or record the livestream. (ICU F3 – Non-content)

Proactive technology

  • Assess whether proactive technology to detect or support the detection of target illegal content is available, is technically feasible to deploy on their service, and meets the proactive technology criteria. If so, they should deploy it. (ICU C11 – Proactive content-related)
  • Assess existing proactive technology that they are using to detect or support the detection of target illegal content against the proactive technology criteria and, if necessary, take steps to ensure the criteria are met. (ICU C12 – Proactive content-related)
  • As ICU C11, but for target content harmful to children. (PCU C9 – Proactive content-related)
  • As ICU C12, but for target content harmful to children. (PCU C10 – Proactive content-related)

Intimate image abuse (IIA) hash matching

  • Use perceptual hash matching to detect image based intimate image abuse content so it can be removed. (ICU C14 – Proactive content-related)

Terrorism hash matching

  • Use perceptual hash matching to detect terrorism content so that it can be removed. (ICU C13 – Proactive content-related)

CSAM hash matching (extended to more service providers)

  • Ensure that hash-matching technology is used to detect and remove child sexual abuse material (CSAM). (ICU C9 – Proactive content-related)

Recommender systems

  • Design and operate recommender systems to ensure that content indicated potentially to be certain kinds of priority illegal content is excluded from users’ recommender feeds, pending further review. (ICU E2 – Proactive content-related)

User sanctions

  • Prepare and apply a sanctions policy in respect of UK users who generate, upload, or share illegal content and/or illegal content proxy, with the objective of preventing future dissemination of illegal content. (ICU H2 – Reactive content-related)
  • As ICU H2, but for content harmful to children and/or harmful content proxy. (PCU H2 – Reactive content-related)
  • Set and record performance targets for the content moderation function, covering the time period for taking relevant content moderation action. (ICU C4, PCU C4 – Reactive content-related)

CSEA user banning

  • Ban users who share, generate, or upload CSEA, and those who receive CSAM, and take steps to prevent their return to the service for the duration of the ban. (ICU H3 – Reactive content-related)

Highly effective age assurance

  • Definitions of highly effective age assurance; principles that providers should have regard to when implementing an age assurance process. (ICU B1, PCU B1 – Non-content)
  • Appeals of highly effective age assurance decisions. (ICU D15, ICU D16 – Non-content)

Increasing effectiveness for U2U settings, functionalities, and user support

  • Safety defaults and support for child users. (ICU F1 & F2 – Non-content)

Crisis response

  • Prepare and apply an internal crisis response protocol. Conduct and record a post-crisis analysis. Dedicated law enforcement crisis communication channel. (ICU C15 / PCU C11 – Non-content)

Appeals

  • Appeals to cover decisions taken on the basis that content was an ‘illegal content proxy’. (ICU D – Reactive content-related)
  • Appeals to cover decisions taken on the basis that content was a ‘content harmful to children proxy’. (PCU D – Reactive content-related)


[Amended 'high' degree of confidence to 'higher' in two places. 17 Nov 2025.] 

Saturday, 20 September 2025

FAQs a million

At the end of May Ofcom circulated a set of FAQs to the attendees of a three day Online Safety Act Explained event that it ran in February 2025. The FAQs address a series of questions for which time did not permit of an answer at the event. Sadly (since these points are of interest to a wider public) Ofcom has not published them on its website. Perhaps it may yet do so, but in the meantime here is a link to an unofficial copy.

Day 3 of the February event was billed as a ‘deep dive’ into three topics: Online safety worldwide, What it means to be ‘low risk’, and How to tackle complex risks, including child sexual abuse, grooming and fraud - each with a Q&A session.

The ‘low risk’ session was eagerly anticipated by many who felt a bit like Online Safety Act orphans: apparently – or at least arguably – in scope, but with little in the way of practical help (notwithstanding Ofcom’s interactive regulation checker) to understand whether they were actually caught by the Act.

This was at a time when it was emerging that some community forums run by individuals on a voluntary basis were planning to close down rather than face the Act’s compliance burden or the risk of penalties.

Ofcom, it should be acknowledged, has to work within the constraints of the Act. It has no power to exempt categories of sites from most of the Act’s basic obligations. So if a site is in scope it has to do an illegal content risk assessment, a children’s access assessment and (if children are likely to access the site) a children’s risk assessment. Those (and some other obligations, such as a set of terms and conditions) are required by the Act and there is nothing that Ofcom can do about that.

Where Ofcom does have discretion is in the Code of Practice measures that it chooses to recommend for compliance with the substantive duties imposed by the Act. Ofcom went a long way towards minimising the burden on small, low-risk sites (although it has now proposed, in its recent additional safety measures consultation, that all in-scope U2U sites should have a sanctions policy).

So it was never realistic to expect Ofcom unilaterally to exempt sites that are in scope of the Act, or to give assurances that it would not enforce against any of them.

There might, however, have been hope of gaining more clarity about two basic questions: exactly which kinds of site are in and out of scope; and who is treated as the provider of an in-scope service (and thus responsible for compliance with the Act)?

The latter question – who is the provider – can be crucial: if I run a community forum hosted on a commercial platform, is the platform the provider of my forum? If so, I can leave compliance to the platform. But if I am treated as the provider of my platform-based community forum, then the compliance obligations are on me and I am – at least theoretically – on Ofcom’s radar for enforcement.

That question has implications for the extent of Ofcom’s own responsibilities, as well as for the individual operator. The government’s Impact Assessment estimated that 25,000 UK service providers would be in scope. But on one answer to the ‘who is the provider?’ question, Ofcom ought to be considering hundreds of thousands, if not millions, of individual operators of forums and groups.

Similarly, consider bloggers who have ‘comments on comments’ functionality enabled. Are they, rather than the blogging platforms, regarded as the providers of a user-to-user service in relation to comments posted to their individual blogs?

Unfortunately, basic issues of scope and service provider identification are among the areas in which the Act is at its most opaque. As such, one can have considerable sympathy for Ofcom, handed a hospital pass by Parliament and left to make sense of the legislation. It would not be surprising if a lively sense of self-preservation within Ofcom HQ led to some reluctance to admit of the possibility that hundreds of thousands, or maybe more, individuals might have to conduct risk assessments and write terms and conditions for their small forums and blogs. Nevertheless, the Act is what it is.

The FAQs are revealing as to which of the unanswered questions from the event Ofcom has expanded on at length, and on which it has opted for inscrutability.

There are careful explanations of the requirements applicable to low-risk services, extensive discussions of how to conduct risk assessments and children’s access assessments, implementation of highly effective age assurance, moderation requirements and a few others. But when it comes to scope Ofcom has for the most part little to say beyond reciting the words of the Act.

Three points in particular are relevant to whether a small, not-for-profit service is in or out of scope:

(a) What is meant by a ‘significant number’ of UK users

(b) Whether a voluntary, not-for-profit site can have a ‘target market’. If yes, then a UK-targeted site is in scope even if it does not have a significant number of UK users.

(c) Who is responsible for compliance where someone runs a volunteer community group on another social media or similar platform?

Ofcom plays the ‘significant number’ question straight back to the service provider: 

“The Act does not define what is meant by a ‘significant number’ of UK users for the purposes of considering the ‘UK links’ test. Service providers should be able to explain their judgement, especially if they think they do not have a significant number of UK users.”

To continue the cricketing metaphor, Ofcom plays no stroke to ‘target market’: the FAQs do not address the question at all.

For the critical question of who provides the U2U service where a community forum is hosted on a commercial platform, Ofcom adopts a stonewall defence: retreating behind two sentences copied out from S.226 of the Act.

For decentralised services Ofcom notes that ‘it is possible’ that if a user operates a decentralised service, and has control over who can use the user-to-user part of the service, they are the service provider under the Act.

One can speculate that if Ofcom were confident that the service provider for individual community forums is always the commercial platform, it would have said so. The fact that it has not done so suggests that Ofcom may reckon that it is arguable that an individual forum operator may, at least potentially, be a service provider.

This dog’s breakfast is, to reiterate, not Ofcom’s fault.  If difficult questions were raised during the legislative process, they were like as not to be ignored, or met with a lecture about Big Tech, algorithms, CSAM, terrorism, children, the Wild West Web, the cesspit of social media and all the rest.

From a purely political perspective donning blinkers is understandable. Revealing a can of worms could have derailed the Bill. Easier to keep the goggles on, power towards Royal Assent on the back of the techlash and worry about sweeping up the pieces of the car crash afterwards.

We are now seeing the results of that. Nevertheless, over a year into implementation of the Act, and with Ofcom’s cumulative expenditure to 2026 projected at £279 million, could it now usefully grasp the nettle of addressing at least some difficult interpretation questions?

For instance, might Ofcom publish non-binding discussion papers around the knotty points of scope and compliance responsibility? Such papers could usefully form the basis of a better-informed future discussion about what the Act should be regulating.

Otherwise, the danger is that we will continue to fumble in the dark, aided only by agenda-driven advocacy from all sides. After this amount of time more illumination would be welcome.

[This blogpost has its origins in a contribution to the “Future of the Online Safety Act 2023” event held by the University of Sussex on 6 June 2025.]


Wednesday, 10 September 2025

The Online Safety Act: guardian of free speech?

A blogpost headed “The Online Safety Act: A First Amendment for the UK” lacks nothing in shock value. The one thing that we might have thought supporters and critics alike were agreed upon is that the Act, whether for good or ill, does not – and is certainly not intended to – enact the US First Amendment. If anything, it is seen as an antidote to US free speech absolutism.

We might even be tempted to dismiss it as parody. But look more closely and this comes from a serious source: the Age Verification Providers Association. So, perforce, we ought to treat the claim seriously.

The title, we must assume, is a rhetorical flourish rather than a literal contention that the Act compares with an embedded constitutional right capable of striking down incompatible legislation. Drilling down beyond the title, we find the slightly less adventurous claim that Section 22 of the Act (not the Act as a whole) constitutes “A new First Amendment for the UK”.

What is Section 22?

Section 22 places a duty on platforms to “have particular regard to the importance of protecting users’ right to freedom of expression within the law” when implementing the Act’s safety duties.

Like so much in the Act, this duty is not everything that it might seem at first sight. One thing it obviously is not is a constitutional protection conferring the power to invalidate legislation. So then, what are AVPA’s arguments that Section 22, even loosely speaking, is a UK First Amendment?

By way of preliminary, the article’s focus is mostly on the Act’s illegality duties: “The Act’s core aim is to enhance online safety by requiring user-to-user services (e.g. social media platforms) to address illegal content…” and “Far from censoring legal content, the OSA targets only illegal material, …”.

That might come as news to those who were under the impression that the Act’s core aim was protection of children. The children’s duties do target some kinds of material that are legal but considered to be harmful for under-18s. But let us mentally put the children’s duties to one side and concentrate on the illegality duties. Those require blocking or removal of user content, as opposed to hiding it behind age gates.

AVPA’s arguments boil down to five main points:

  • The Act imposes no obligation to remove legal content for adults.
  • The Act’s obligations leave lawful speech untouched (or at least not unduly curtailed).
  • Section 22 is a historic moment, being the first time that Parliament has legislated an explicit duty for online services to protect users’ lawful speech, enforceable by Ofcom.
  • The Section 22 protection goes beyond ECHR Article 10 rights.
  • Section 22 improves on the ECHR’s reliance on court action.

Taking these in turn:

No obligation to remove legal content for adults

This ought to be the case, but is not. It is certainly true that the express ‘legal but harmful for adults’ duties in the original Bill were dropped after the 2022 Conservative leadership election. But when we drill down into the Act’s duties to remove illegal content, we find that they bake in a requirement to remove some legal content. This is for three distinct reasons (all set out in S.192):

1. The test for whether a platform must treat user content as illegal is “reasonable grounds to infer”. That is a relatively low threshold that will inevitably capture some content that is in fact legal.

2. Platforms have to make the illegality judgement on the basis of all relevant information reasonably available to them. Unavailable off-platform information may provide relevant context in demonstrating that user content is not illegal.

3. The Act requires the platform to ignore the possibility of a defence, unless it positively has reasonable grounds to infer that a defence may be successful. Grounds that do in fact exist may well not be apparent from the information available to the platform. 

In this way what ought to be false positives are treated as true positives. 
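
To make the mechanics concrete, the toy sketch below caricatures the decision logic described above. The field names and boolean framing are mine, and real illegality judgements are legal assessments rather than boolean functions; the point is simply how the statutory test treats a post whose exculpatory context is invisible to the platform:

```python
# Highly simplified sketch of the decision logic described above (section 192),
# purely to illustrate how legal content can end up treated as illegal. Real
# illegality judgements are legal assessments, not boolean functions.

from dataclasses import dataclass

@dataclass
class ContentAssessment:
    # What the platform can actually see ("reasonably available" information)
    reasonable_grounds_to_infer_offence: bool
    reasonable_grounds_to_infer_defence: bool
    # What is true in the real world, including off-platform context
    actually_illegal: bool

def must_treat_as_illegal(a: ContentAssessment) -> bool:
    """The statutory test looks only at what is reasonably available to the
    platform, and a possible defence is ignored unless there are positive
    grounds to infer that it may succeed."""
    return a.reasonable_grounds_to_infer_offence and not a.reasonable_grounds_to_infer_defence

# A post that looks like an offence on-platform, where the exculpatory context
# (or defence) exists off-platform and is invisible to the provider:
post = ContentAssessment(
    reasonable_grounds_to_infer_offence=True,
    reasonable_grounds_to_infer_defence=False,
    actually_illegal=False,
)
print(must_treat_as_illegal(post))   # True - a lawful post is removed
```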

These issues are exacerbated if platforms are required to engage in proactive detection and removal of illegal content using automated technology.

The AVPA article calls out “detractors labeling it as an instrument of censorship that stifles online expression. This narrative completely misrepresents the Act’s purpose and effect”. One purpose of the Act is certainly to tackle illegal content. Purpose, however, is only the label on the legislative tin. Effect is what the tin contains. Inside this tin we find substantive obligations that will inevitably result in legal content being removed: not through over-caution, but as a matter of what the Act expressly requires.

The inevitable collateral damage to lawful speech embedded in the illegal content provisions has always been a concern to critics who have taken the time to read the Act.

Lawful speech untouched

The article suggests that the Act “ensur[es] lawful speech remains untouched”. However, the Act cannot ensure zero false positives. For many, perhaps most, offences, illegality cannot reliably be adjudged simply by looking at the post. Add to that the effects of S.192, and lawful speech will inevitably be affected to some degree. Later in the AVPA post the somewhat less ambitious claim is made that regulatory oversight will, as the regime matures and is better understood, ensure lawful expression isn’t unduly curtailed. (emphasis added)

These objections are not just an abstract matter of parsing the text of the Act. We can think of the current Ofcom consultation on proactive technology, in which Ofcom declines to set a concrete cap on false positives for its ‘principles-based’ measures. It acknowledges that:

“The extent of false positives will depend on the service in question and the way in which it configures its proactive technology. The measure allows providers flexibility in this regard, including as to the balance between precision and recall (subject to certain factors set out earlier in this chapter). We recognise that this could lead to significant variation in impact on users’ freedom of expression between services.” [9.136] (emphasis added)

Section 22’s historic moment

“Section 22 marks a historic moment as the first time Parliament has legislated an explicit duty for online services to protect users’ lawful speech and privacy, enforceable by Ofcom.” (underlining in the original)

This proposition is probably at the heart of AVPA’s argument. It goes on that Section 22 “mandates that regulated platforms prioritize users’ freedom of expression and privacy”.

And later: “While the UK lacks a single written constitution, Section 22 effectively strengthens free speech within the UK’s legal framework. It’s a tailored, enforceable safeguard for the digital age making platforms accountable for preserving expression while still tackling illegal content. Far from enabling censorship, the OSA through Section 22 sets a new standard for protecting online discourse.”

A casual reader might assume that Parliament had imposed a new self-standing duty on platforms to protect users’ lawful speech, akin to the general duty imposed on higher education institutions by the Higher Education (Freedom of Speech) Act 2023. That requires such institutions to:

“… take the steps that, having particular regard to the importance of freedom of speech, are reasonably practicable for it to take in order to … secur[e] freedom of speech within the law for— (a) staff of the provider, (b) members of the provider, (c) students of the provider, and (d) visiting speakers.”

The Section 22 duty is quite different: a subsidiary counter-duty intended to mitigate the impact on freedom of expression of the Act’s main safety (and, for Category 1 services, user empowerment) duties.

Thus it applies, as the AVPA article says: “when designing safety measures”. To be clear, it applies only to safety measures implemented in order to comply with the duties imposed by the Act. It has no wider, standalone application.

When that is appreciated, the reason why Parliament has not previously legislated such a duty is obvious. It has never previously legislated anything like an online safety duty – with its attendant risk of interference with users’ legitimate freedom of expression – which might require a mitigating provision to be considered.

Nor, it should be emphasised, does Section 22 override the Act’s express safety duties. It is no more than a “have particular regard to the importance of” duty. 

Moreover, the Section 22 duty is refracted through the prism of Ofcom's safety Codes: the Section 22 duty is deemed to be satisfied if a platform complies with safeguards set out in an Ofcom Safety Code of Practice. What those safeguards should consist of is for Ofcom to decide. 

The relevance of the Section 22 duty is, on the face of it, especially limited when it comes to the platform’s illegal content duties. The duty relates to the user’s right to “freedom of expression within the law”. Since illegal content is outside the law, what impact could the freedom of expression duty have? Might it encourage a platform to err on the side of the user when making marginal decisions about illegality? Perhaps. But a “have particular regard” duty does not rewrite the plain words of the Act prescribing how a platform has to go about making illegality judgements. Those (viz S.192) bake in removal of legal content.

All that considered, it is a somewhat bold suggestion that Section 22 marks a historic moment, or that it sets a new standard for protecting online discourse. Section 22 exists at all only because of the risk to freedom of expression presented by the Act’s safety duties.

The Section 22 protection goes beyond ECHR Article 10 rights.

The AVPA article says that “This is the first time UK domestic legislation explicitly protects online expression beyond the qualified rights under Article 10 of the European Convention on Human Rights (ECHR), as incorporated via the Human Rights Act 1998.” (emphasis added)

If this means only that the right referred to in Section 22 is something different from the ECHR Article 10 right, that has to be correct. However, it is not more extensive. The ‘within the law’ qualification renders the scope of the right narrower than the ECHR. ECHR rights can address overreaching domestic laws (and under the Human Rights Act a court can make a declaration of incompatibility). On the face of it the Section 22 protection cannot go outside domestic laws.

Section 22 improves on the ECHR’s reliance on court action.

Finally, the AVPA article says that “Unlike the ECHR which often requires costly and lengthy court action to enforce free speech rights, Section 22 embeds these protections directly into the regulatory framework for online platforms. Ofcom can proactively warn – and now has – or penalize platforms that over-block legal content ensuring compliance without requiring individuals to go to court. This makes the protection more immediate and practical, …”

This is not the place to debate whether the possibility of action by a regulator is in principle a superior remedy to legal action by individuals. That raises questions not only about access to justice, but also about how far it is sensible to put faith in a regulator. The rising chorus of grumbles about Ofcom’s implementation of the Act might suggest 'not very'. But that would take us into the far deeper waters of the wisdom or otherwise of adopting a ‘regulation by regulator’ model. We don’t need to take that plunge today.

Ofcom has always emphasised that its supervision and enforcement activities are concerned with platforms’ systems and processes, not with individual content moderation decisions: “… our job is not to opine on individual items of content. Our job is to make sure that companies have the systems that they need” (Oral evidence to Speaker’s Conference, 3 September 2025).

To be sure, that has always seemed a bit of a stretch: how is Ofcom supposed to take a view on whether a platform’s systems and processes are adequate without considering examples of its individual moderation decisions? Nevertheless, it is not Ofcom’s function to take up individual complaints. A user hoping that Ofcom enforcement might be a route to reinstatement of their cherished social media post is liable to be disappointed.