Wednesday, 10 September 2025

The Online Safety Act: guardian of free speech?

A blogpost headed “The Online Safety Act: A First Amendment for the UK” lacks nothing in shock value. The one thing that we might have thought supporters and critics alike were agreed upon is that the Act, whether for good or ill, does not – and is certainly not intended to – enact the US First Amendment. If anything, it is seen as an antidote to US free speech absolutism.

We might even be tempted to dismiss it as parody. But look more closely and this comes from a serious source: the Age Verification Providers Association. So, perforce, we ought to treat the claim seriously.

The title, we must assume, is a rhetorical flourish rather than a literal contention that the Act compares with an embedded constitutional right capable of striking down incompatible legislation. Drilling down beyond the title, we find the slightly less adventurous claim that Section 22 of the Act (not the Act as a whole) constitutes “A new First Amendment for the UK”.

What is Section 22?

Section 22 places a duty on platforms to “have particular regard to the importance of protecting users’ right to freedom of expression within the law” when implementing the Act’s safety duties.

Like so much in the Act, this duty is not everything that it might seem at first sight. One thing it obviously is not is a constitutional protection conferring the power to invalidate legislation. So then, what are AVPA’s arguments that Section 22, even loosely speaking, is a UK First Amendment?

By way of preliminary, the article’s focus is mostly on the Act’s illegality duties: “The Act’s core aim is to enhance online safety by requiring user-to-user services (e.g. social media platforms) to address illegal content…” and “Far from censoring legal content, the OSA targets only illegal material, …”.

That might come as news to those who were under the impression that the Act’s core aim was protection of children. The children’s duties do target some kinds of material that are legal but considered to be harmful for under-18s. But let us mentally put the children’s duties to one side and concentrate on the illegality duties. Those require blocking or removal of user content, as opposed to hiding it behind age gates.

AVPA’s arguments boil down to five main points:

  • The Act imposes no obligation to remove legal content for adults.
  • The Act’s obligations leave lawful speech untouched (or at least not unduly curtailed).
  • Section 22 is a historic moment, being the first time that Parliament has legislated an explicit duty for online services to protect users’ lawful speech, enforceable by Ofcom.
  • The Section 22 protection goes beyond ECHR Article 10 rights.
  • Section 22 improves on the ECHR’s reliance on court action.

Taking these in turn:

No obligation to remove legal content for adults

This ought to be the case, but is not. It is certainly true that the express ‘legal but harmful for adults’ duties in the original Bill were dropped after the 2022 Conservative leadership election. But when we drill down into the Act’s duties to remove illegal content, we find that they bake in a requirement to remove some legal content. This is for three distinct reasons (all set out in S.192):

1. The test for whether a platform must treat user content as illegal is “reasonable grounds to infer”. That is a relatively low threshold that will inevitably capture some content that is in fact legal.

2. Platforms have to make the illegality judgement on the basis of all relevant information reasonably available to them. Unavailable off-platform information may provide relevant context in demonstrating that user content is not illegal.

3. The Act requires the platform to ignore the possibility of a defence, unless it positively has reasonable grounds to infer that a defence may be successful. Grounds that do in fact exist may well not be apparent from the information available to the platform. 

In this way what ought to be false positives are treated as true positives. 
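By way of illustration only, the sketch below models that judgement structure in Python. The function names, the toy pattern-matching and the example post are invented for this post and do not reproduce the statutory wording; the point is simply how a test built on the information available to the platform, a low inferential threshold and a presumption against unapparent defences ends up requiring lawful content to be treated as illegal.

```python
# Stylised model of the S.192-style judgement described above. All names,
# thresholds and data are illustrative assumptions, not the statutory text.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    on_platform_context: dict = field(default_factory=dict)
    off_platform_facts: dict = field(default_factory=dict)  # real, but invisible to the platform

def looks_like_offence(available: dict) -> bool:
    # Point 1: a crude pattern match is enough to trip the relatively low
    # 'reasonable grounds to infer' threshold.
    return "threat" in available.get("text", "").lower()

def defence_apparent(available: dict) -> bool:
    # Point 3: a defence counts only if grounds for it appear in the available
    # information; a defence that in fact exists off-platform is ignored.
    return available.get("clearly_satirical", False)

def must_treat_as_illegal(post: Post) -> bool:
    # Point 2: the judgement is made only on information reasonably available
    # to the platform; off_platform_facts never enter the calculation.
    available = {"text": post.text, **post.on_platform_context}
    return looks_like_offence(available) and not defence_apparent(available)

# A post that is in fact lawful (obvious banter to anyone who knows the
# off-platform context) must nonetheless be treated as illegal content.
post = Post(text="Last warning: hand over the biscuits or face my threat",
            off_platform_facts={"long_running_office_joke": True})
print(must_treat_as_illegal(post))  # True - a false positive treated as a true positive
```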

These issues are exacerbated if platforms are required to engage in proactive detection and removal of illegal content using automated technology.

The AVPA article calls out “detractors labeling it as an instrument of censorship that stifles online expression. This narrative completely misrepresents the Act’s purpose and effect”. One purpose of the Act is certainly to tackle illegal content. Purpose, however, is only the label on the legislative tin. Effect is what the tin contains. Inside this tin we find substantive obligations that will inevitably result in legal content being removed: not through over-caution, but as a matter of what the Act expressly requires.

The inevitable collateral damage to lawful speech embedded in the illegal content provisions has always been a concern to critics who have taken the time to read the Act.

Lawful speech untouched

The article suggests that the Act “ensur[es] lawful speech remains untouched”. However, the Act cannot ensure zero false positives. For many, perhaps most, offences, illegality cannot reliably be adjudged simply by looking at the post. Add to that the effects of S.192, and lawful speech will inevitably be affected to some degree. Later in the AVPA post the somewhat less ambitious claim is made that regulatory oversight will, as the regime matures and is better understood, ensure lawful expression isn’t unduly curtailed (emphasis added).

These objections are not just an abstract matter of parsing the text of the Act. We can think of the current Ofcom consultation on proactive technology, in which Ofcom declines to set a concrete cap on false positives for its ‘principles-based’ measures. It acknowledges that:

“The extent of false positives will depend on the service in question and the way in which it configures its proactive technology. The measure allows providers flexibility in this regard, including as to the balance between precision and recall (subject to certain factors set out earlier in this chapter). We recognise that this could lead to significant variation in impact on users’ freedom of expression between services.” [9.136] (emphasis added)

Section 22’s historic moment

“Section 22 marks a historic moment as the first time Parliament has legislated an explicit duty for online services to protect users’ lawful speech and privacy, enforceable by Ofcom.” (underlining in the original)

This proposition is probably at the heart of AVPA’s argument. It goes on that Section 22 “mandates that regulated platforms prioritize users’ freedom of expression and privacy”.

And later: “While the UK lacks a single written constitution, Section 22 effectively strengthens free speech within the UK’s legal framework. It’s a tailored, enforceable safeguard for the digital age making platforms accountable for preserving expression while still tackling illegal content. Far from enabling censorship, the OSA through Section 22 sets a new standard for protecting online discourse.”

A casual reader might assume that Parliament had imposed a new self-standing duty on platforms to protect users’ lawful speech, akin to the general duty imposed on higher education institutions by the Higher Education (Freedom of Speech) Act 2023. That requires such institutions to:

“… take the steps that, having particular regard to the importance of freedom of speech, are reasonably practicable for it to take in order to … secur[e] freedom of speech within the law for— (a) staff of the provider, (b) members of the provider, (c) students of the provider, and (d) visiting speakers.”

The Section 22 duty is quite different: a subsidiary counter-duty intended to mitigate the impact on freedom of expression of the Act’s main safety (and, for Category 1 services, user empowerment) duties.

Thus it applies, as the AVPA article says: “when designing safety measures”. To be clear, it applies only to safety measures implemented in order to comply with the duties imposed by the Act. It has no wider, standalone application.

When that is appreciated, the reason why Parliament has not previously legislated such a duty is obvious. It has never previously legislated anything like an online safety duty – with its attendant risk of interference with users’ legitimate freedom of expression – which might require a mitigating provision to be considered.

Nor, it should be emphasised, does Section 22 override the Act’s express safety duties. It is no more than a “have particular regard to the importance of” duty. 

Moreover, the Section 22 duty is refracted through the prism of Ofcom's safety Codes: the Section 22 duty is deemed to be satisfied if a platform complies with safeguards set out in an Ofcom Safety Code of Practice. What those safeguards should consist of is for Ofcom to decide. 

The relevance of the Section 22 duty is, on the face of it, especially limited when it comes to the platform’s illegal content duties. The duty relates to the user’s right to “freedom of expression within the law”. Since illegal content is outside the law, what impact could the freedom of expression duty have? Might it encourage a platform to err on the side of the user when making marginal decisions about illegality? Perhaps. But a “have particular regard” duty does not rewrite the plain words of the Act prescribing how a platform has to go about making illegality judgements. Those (viz S.192) bake in removal of legal content.

All that considered, it is a somewhat bold suggestion that Section 22 marks a historic moment, or that it sets a new standard for protecting online discourse. Section 22 exists at all only because of the risk to freedom of expression presented by the Act’s safety duties.

The Section 22 protection goes beyond ECHR Article 10 rights.

The AVPA article says that “This is the first time UK domestic legislation explicitly protects online expression beyond the qualified rights under Article 10 of the European Convention on Human Rights (ECHR), as incorporated via the Human Rights Act 1998.” (emphasis added)

If this means only that the right referred to in Section 22 is something different from the ECHR Article 10 right, that has to be correct. However, it is not more extensive. The ‘within the law’ qualification renders the scope of the right narrower than the ECHR. ECHR rights can address overreaching domestic laws (and under the Human Rights Act a court can make a declaration of incompatibility). On the face of it the Section 22 protection cannot go outside domestic laws.

Section 22 improves on the ECHR’s reliance on court action.

Finally, the AVPA article says that “Unlike the ECHR which often requires costly and lengthy court action to enforce free speech rights, Section 22 embeds these protections directly into the regulatory framework for online platforms. Ofcom can proactively warn – and now has – or penalize platforms that over-block legal content ensuring compliance without requiring individuals to go to court. This makes the protection more immediate and practical, …”

This is not the place to debate whether the possibility of action by a regulator is in principle a superior remedy to legal action by individuals. That raises questions not only about access to justice, but also about how far it is sensible to put faith in a regulator. The rising chorus of grumbles about Ofcom’s implementation of the Act might suggest 'not very'. But that would take us into the far deeper waters of the wisdom or otherwise of adopting a ‘regulation by regulator’ model. We don’t need to take that plunge today.

Ofcom has always emphasised that its supervision and enforcement activities are concerned with platforms’ systems and processes, not with individual content moderation decisions: “… our job is not to opine on individual items of content. Our job is to make sure that companies have the systems that they need” (Oral evidence to Speaker’s Conference, 3 September 2025).

To be sure, that has always seemed a bit of a stretch: how is Ofcom supposed to take a view on whether a platform’s systems and processes are adequate without considering examples of its individual moderation decisions? Nevertheless, it is not Ofcom’s function to take up individual complaints. A user hoping that Ofcom enforcement might be a route to reinstatement of their cherished social media post is liable to be disappointed.


Wednesday, 3 September 2025

Google v Russia: a hint of things to come

The outcome of Google’s complaint to the European Court of Human Rights in Google v Russia cannot be considered a surprise. The facts were so resoundingly against Russia that anything but a finding in Google’s favour would have had everyone reaching for the smelling salts.

Russia, following its resignation from the Council of Europe, chose not to participate in the case. We therefore have to be cautious about placing too much reliance on the Court’s reasoning. Nevertheless, the case is of interest not just for the main judgment, but for the concurring Opinion of Acting President Judge Pavli, who offered his reflections on how he would like the Court’s major online platform jurisprudence to develop in the future.

He speculates, for instance, that the Strasbourg court might at some point in the future – under the banner of securing freedom of expression – decide to require Member states, as a positive obligation under the Convention, to impose ‘right to a forum’ obligations on large platforms.

ECHR Article 10 and positive state obligations

For most Convention rights, positive state obligations (unlike protections against State action) do not exist automatically – the Court has to take the step of deciding that a positive obligation exists in specific circumstances. The position is summarised in Palomo Sanchez:

“58. …in addition to the primarily negative undertaking of a State to abstain from interference in the rights guaranteed by the Convention, “there may be positive obligations inherent” in those rights.

59.  This is also the case for freedom of expression, of which the genuine and effective exercise does not depend merely on the State’s duty not to interfere, but may require positive measures of protection, even in the sphere of relations between individuals. In certain cases, the State has a positive obligation to protect the right to freedom of expression, even against interference by private persons…”

Although the Strasbourg Court frequently invokes positive obligations, Article 10 remains an area in which it has so far been relatively cautious in finding that positive obligations exist, especially where horizontal relations between private persons are concerned.

The reason for that is fairly obvious: freedom of expression is a highly sensitive area; and the effect of deploying a positive obligation (especially horizontally) is that Member states, rather than being free to make their own policy choices within boundaries set by the Court, must implement a particular policy devised by the Court (subject only to the latitude afforded to Member states by the ‘margin of appreciation’).

Some might think that a right to a forum is a good idea. Others might disagree. However, the prior question – on which opinions will also vary – is whether each Member state legislature gets to decide that policy question for itself, quite possibly coming to a variety of different answers, or whether the Strasbourg court gets to determine a uniform policy under the banner of securing Convention rights.

Google v Russia – the facts

Turning to the Google v Russia case itself, two sets of facts came before the Strasbourg court: first, penalties imposed on Google for not complying with orders issued by the Russian telecoms regulator RKN to remove YouTube user content critical of the government and supporting the political opposition.

Second, a more complex history of Google barring a Russian state YouTube channel (Tsargrad) following the imposition of US and EU sanctions, then, after a Russian court order, reinstating it minus monetisation.  A Russian bailiff decided that the reinstatement did not comply with the order and imposed penalties greatly exceeding Tsargrad’s lost revenue. The local courts declined to interfere. Another 20 plaintiffs, predominantly Russian state channels, brought copycat claims in the Russian courts, with the result that by September 2022 the accumulated financial penalties were in the region of $16 trillion. Google Russia filed for bankruptcy in June 2022.

Applicability of Article 10

The Court held, following its Autronic decision, that Article 10 applies to everyone including legal entities and commercial profit-making companies. It observed that service providers perform an important role in facilitating access to information and debate on a wide range of political, social and cultural topics.

The Court had previously acknowledged in Tamiz that both Google and its end users enjoyed Article 10 rights. In Cengiz it had acknowledged that YouTube constituted a unique platform for freedom of expression.

Lastly, citing Özgür Radyo, it reiterated that any measure compelling a platform to restrict access to content under threat of penalty constitutes interference with freedom of expression.

RKN’s removal orders

The ECtHR majority decided that even assuming that the interference was genuinely in pursuit of a legitimate aim (as to which it was not satisfied), the government’s actions were not necessary in a democratic society. They observed that penalising Google LLC for hosting content critical of government policies or alternative views on military actions, without demonstrating a pressing social need for its removal, struck at the very heart of the internet’s function as a means for the free exchange of ideas and information. [80]

Specifically, the majority held:

-         The content that the authorities sought to suppress was political in nature [82]

-         The sanctions were disproportionate. By their nature and scale they were liable to have a chilling effect on Google’s willingness to host content critical of authorities. The approach of the Russian authorities effectively required Google to act as censors of political speech on behalf of state authorities. [81]

-         The Russian domestic courts displayed a perfunctory approach to necessity and failed to examine the matter in the light of the Convention requirements. [82]

Existence of an interference Judge Pavli commented on the majority’s approach to the existence of an interference with the platform’s Article 10 rights:

“The Court considers that the imposition of such severe penalties, combined with the threat of further sanctions for non-compliance with [RKN takedown requests], exerted considerable pressure on Google LLC to censor content on YouTube, thereby interfering with its role as a provider of a platform for the free exchange of ideas and information.” [5]

He suggested [5] that this was a “novel interpretation”, lacking further elaboration of the nature of the interference or of the role of the applicant companies as holders of Art 10 rights.

He described the Tamiz decision as an Article 8 case that involved Google Inc. only indirectly and that centred primarily on the margin of appreciation afforded to the British courts. He suggested that that single sentence “did not provide a great deal of clarity as to how the Court views the role of such platforms under Article 10”.

That may have some force as a general observation. However, the majority found that there was interference with the means of dissemination. That does not seem especially novel or to require much, if any, elaboration. In Strasbourg caselaw, means of dissemination is a long-established mode of interference with freedom of expression:

“… any restriction imposed on [means of dissemination] necessarily interferes with the right to receive and impart information” (Yildirim (2012), citing Autronic (1990); Cengiz (2015)). 

It is not obvious how any further elaboration of the nature of the interference or the roles of the Google companies would have assisted in reaching the conclusion that an interference with the means of dissemination existed.

There could be a question as to who has standing to complain of an interference with means of dissemination: an affected user, the provider of the means of dissemination, or both.

In Yildirim an internet user with a website on Google Sites complained that the whole of Google Sites, including his site, had been blocked by the telecommunications authority following an order of a Turkish court. His site was not the subject of the original court order. Similarly, in Cengiz the complainants were three legal academics affected by a Turkish court order to block YouTube. In Autronic, a commercial company was denied a broadcast licence to show a Russian satellite TV channel at a trade fair.

Yildirim, the Cengiz complainants and Autronic were each held to have standing to complain to Strasbourg. The Court in Autronic pointed out that Article 10 itself “expressly mentions in the last sentence of its first paragraph (art. 10-1) certain enterprises essentially concerned with the means of transmission.”

Rights and responsibilities While Judge Pavli agreed that the present case fell manifestly into the category of censorship, he embarked on a disquisition – under the title “Rights and Responsibilities of Major Online Platform Operators” – about possible duties and responsibilities of major platforms. He observed how they were no longer “mere” intermediaries, and increasingly used human and algorithmic tools for curating, moderating and monetising third-party content. He took as his cue a comment in the majority judgment:

“…At the same time, the Court notes that when internet intermediaries manage content available on their platforms or play a curatorial or editorial role, including through the use of algorithms, their important function in facilitating and shaping public debate engenders duties of care and due diligence, which may also increase in proportion to the reach of the relevant expressive activity…” [79]

However, neither the majority comment nor Judge Pavli’s additional observations have any obvious relevance to whether an interference existed in this case. Indeed, Judge Pavli acknowledged that the question raised by the RKN removal orders did not concern what obligations online hosting platforms might have, but what rights they enjoyed under Article 10 of the Convention.

It is perhaps unsurprising that the majority fastened on to the simplest, most obvious basis for its decision: disproportionate sanctions and domestic court failures.

Article 10(2) duties and responsibilities In any event, within Article 10 any ‘duties and responsibilities’ come into play, if at all, only as a factor at the second, 10(2), stage: necessity and proportionality of the state’s interference with someone’s Article 10 rights. As pointed out in the dissenting judgment of Judges Sajó and Tsotsoria in Delfi, Article 10(2) does not provide any basis for requiring the imposition of independent, standalone duties:

“The protection of freedom of expression cannot be turned into an exercise in imposing duties. The “duties and responsibilities” clause of Article 10 § 2 is not a stand-alone provision: it is inserted there to explain why the exercise of the freedom in question may be subject to restrictions, which must be necessary in a democratic society. It is only part of the balance that is required by Article 10 §2.” [38]

Thus any standalone platform duties could be imposed by Strasbourg only via the doctrine of Member state positive obligations.

Disinformation A clue to what may have lain behind Judge Pavli’s exegesis on duties and responsibilities lies in his opening comment: that Russia’s measures ostensibly concerned prevention of mass disinformation. [2] In a climate in which it is routinely said that platforms should have a duty to prevent dissemination of disinformation, an opportunity to explore that issue could be tempting. In another case it might be necessary to explore the ‘duties and responsibilities’ issues that can arise under Article 10(2), when considering the legitimate aim, necessity and proportionality of a state interference. This was not that case.

A positive State obligation? Even less would Google v Russia have been a suitable case in which to explore whether Strasbourg should require Member states to impose, via the doctrine of positive obligations, a self-standing duty on large platforms to take steps to prevent disinformation. It is not entirely clear if a positive obligation is what Judge Pavli was contemplating for the future. He starts:

“There is growing recognition that respect for fundamental rights online, and in particular freedom of expression and information, requires responsible practices by providers of major intermediary services”. [8] (emphasis added)

That might perhaps imply a positive obligation on a Member state to legislate. But he concludes:

“it may be considered permissible, in principle, for states to impose on major providers certain due-diligence obligations that seek to promote a safe online environment and to prevent turning their platforms into conduits for the large-scale dissemination of harmful content. In some context, such as elections, these safeguards may prove essential for the protection of democracy itself.” [8] (emphasis added)

If Judge Pavli’s point is only that a Member state’s imposition of due diligence obligations (if clearly and precisely defined, circumscribed and capable of being implemented proportionately – no small hurdle, it should be said) may in principle be compatible with the Convention, that is an unexceptional conclusion.

As to whether or not to impose such obligations, Member states are generally free to make their own policy choices within the constraints of the Convention. But if Judge Pavli is suggesting that (for instance) safeguards regarding elections might constitute a positive Convention obligation for Member states, that is a different matter.

EU Digital Services Act Judge Pavli also noted the EU Digital Services Act due-diligence obligations, although to what end is not clear. The mere existence of a domestic law (even an EU law that in Strasbourg caselaw benefits from a presumption of ECHR compatibility) should not be taken to imply a Convention ‘ought’.

Tsargrad

The second set of facts before the court raised the converse issue to the first: penalties imposed for not hosting user content. The penalties were for breach of a court order requiring reinstatement of a previously terminated YouTube channel.

Tsargrad sued in the Russian courts for wrongful termination of its contract with Google. Ultimately, the only issue before the courts was whether foreign sanctions invoked as grounds for termination complied with Russian public order. Following an unsuccessful appeal, Google restored the Tsargrad account but without monetisation. Penalties ensued.

Existence of an interference On these facts the preliminary question of whether there was an interference with Google’s Article 10 rights was more complex than for the RKN removal orders. The Court reasoned that:

-         Freedom of expression may encompass a right not to be compelled to express oneself. [90]

-         A holistic approach to freedom of expression encompasses both the right to express ideas and the right to remain silent [90]

-         The Russian court order constituted compulsion to host specific content, backed by financial penalties. That:

o   Directly impacted Google’s right to determine what content it was prepared to host on its platform

o   Fell within Article 10 – as with the RKN removal orders, the means of transmission is protected as well as content (Autronic)

Prescribed by law As to whether the interference was prescribed by law, Google argued that the quantum of the penalties far exceeded previous practice and any loss that might have been suffered. The Court had serious doubts on the point, but held that in any event the interference was not justified.

Necessity and proportionality The Court was prepared to assume that the interference had the legitimate aim of protecting Tsargrad’s rights not to be subject to unlawful suspension due to sanctions contrary to public order. However, the interference was not necessary in a democratic society:

-         Where domestic law does not require proportionality in the context of excessive sanctions, or where the damages are manifestly disproportionate, there is a risk of creating a “chilling effect” on freedom of expression [96]

-         The Court noted inconsistencies raising doubts as to whether the measures pursued any genuine “pressing social need”.  Specifically, while purporting to defend freedom to receive information in Tsargrad’s case, the Russian authorities were simultaneously demanding that Google remove content critical of government policies. [97]

-         Penalties were manifestly disproportionate, reaching astronomical sums that bore no relationship to any harm suffered by Tsargrad. Copycat claims by State-owned media outlets increased the penalties to USD 16 trillion. Google’s Russian subsidiary had to be shut down. [98]

-         The domestic authorities were determined to continue recovery even after compliance with the obligation to restore access. The bailiff procedure, conducted within 24 hours without notice to Google, effectively expanded the scope of the order, raising concerns of bad faith. The process was incompatible with legal certainty. [99]

The grossly disproportionate penalties and bad faith enforcement demonstrated disproportionate interference and thus an Article 10 violation.

Penalties or substance? Judge Pavli disagreed with the majority’s approach to necessity and proportionality. In his view the majority was wrong to focus on proportionality of the sanctions: the main reason for failing the necessity test was the Russian courts’ failure to address the Article 10 rights of Google (or indeed of Tsargrad as a user) and to give relevant or sufficient reasons; the sanctions were secondary. [13]

However, he stressed that it was inconsistent with Art 10 for States to force private service providers to collaborate in policing and censoring speech that is clearly protected by the Convention.

Right to a forum Judge Pavli again embarked on a broader discussion, this time under the title “The Next Frontier – Right to a Forum and Procedural Safeguards for Users”.

His previous reference to Tsargrad’s Article 10 user rights foreshadows these comments, in which he speaks of user rights not only as something to be protected against state interference, but as something enjoyed by users vis-à-vis platforms.

Thus, for Judge Pavli it was also of ‘some relevance’ that Russian law doesn’t grant users any due process as against the platform, in contrast to e.g. the EU Digital Services Act.

One might ask why that would be of any relevance to a case in which the Russian courts had upheld and enforced Tsargrad’s claim, not rejected it.  But at any rate, lack of platform due process mechanisms could come into play in a Strasbourg complaint brought by a user whose claim to have been wrongly excluded by a platform was rejected by the local courts.

The immediate problem with a complaint on those grounds is that the Convention does not confer any direct right on a private person to complain about action taken by another private person. The interference has to be attributed in some way to a member State. Such a complaint could only be formulated as breach of a positive obligation on a Member state to legislate for platform due process mechanisms, or in some other fashion to enact a ‘right to a forum’.  

Appleby The obstacle in the way of that approach is Appleby, a 2003 case concerning refusal of access to a privately owned shopping mall to set up a stall collecting signatures for a petition against a proposed building development. The Court expressly rejected a positive obligation on a state to secure such a right to a forum. It observed:

“The issue to be determined is whether the respondent State has failed in any positive obligation to protect the exercise of the applicants’ Article 10 rights from interference by others – in this case, the owner of the Galleries.”

“However, while freedom of expression is an important right, it is not unlimited. Nor is it the only Convention right at stake. Regard must also be had to the property rights of the owner of the shopping centre under Article 1 of Protocol No. 1.”

In Google v Russia Judge Pavli contemplated a future revisit of this long-standing Strasbourg caselaw:

“These issues are largely novel, and I believe in the long run will require the Court to revisit its “right of forum” doctrine as established in the 2003 case of Appleby and Others v. the United Kingdom (no. 44306/98, ECHR 2003-VI).”

He added:

“15. Our own Article 10 case-law on user rights remains rather limited at this juncture. Judging from the above trends, however, it is most likely only a matter of time before the Court is called upon to resolve disputes between the conflicting Article 10 and/or commercial interests of private online platforms, on the one hand, and their users, on the other – including the key question whether a right to a forum ought to exist in this context.

The question will undoubtedly be of great importance for the future of democratic discourse in our societies. Seen from this contemporary perspective, the Appleby principles will need to be revisited, as they are not fit, in my view, for the current online environment.

A small-town shopping mall from 1998 is a long way from the YouTube of 2025. To begin with, unlike the brick‑and‑mortar shopping malls of yesteryear, many of today’s large online platforms are squarely in the information business. More importantly, the debate on the availability of alternative fora of expression will also be much more complex.

16.  The Court will be called upon to assess whether major online platforms that are important for the free flow of information in our societies can be assimilated to the kind of public spaces to which everyone must have unhindered access. Whatever the answer to that question – and whatever rights Article 10 itself may (or may not) confer on users in that regard – it seems reasonable to assume that States will have a sufficiently strong interest in requiring large platforms to provide at least certain basic due-process safeguards aimed at protecting users – the powerful, the famous or just ordinary citizens – from arbitrary exclusion from the marketplace of ideas.”

Many might welcome the prospect of reopening Appleby and inviting the Court to devise rules for large platforms as public spaces.  Enticing new vistas of Big Tech policy advocacy would open up, refracted through the panoramic lens of human rights and conducted under the benevolent gaze of the Strasbourg Court. Whether extending the Court’s Article 10 role further from boundary-setter to meta-legislator would be universally welcomed is another matter.

[4 September 2025 Amended final sentence.]


Monday, 4 August 2025

Ofcom’s proactive technology measures: principles-based or vague?

Ofcom has published its long-expected consultation on additional measures that it recommends U2U platforms and search engines should implement to fulfil their duties under the Online Safety Act.  The focus, this time, is almost entirely on proactive technology: automated systems intended to detect particular kinds of illegal content and content harmful to children, with a view to blocking or swiftly removing them.

The consultation marks a further step along the UK’s diverging path from the EU Digital Services Act. The DSA prohibits the imposition of general monitoring obligations on platforms. Those are just the kind of obligations envisaged by the Online Safety Act’s preventative duties, which Ofcom is gradually fleshing out and implementing.

Ofcom finalised its first Illegal Harms Code of Practice in December 2024. For U2U services the Code contained two proactive technology recommendations: hash and URL matching for CSAM. The initial consultation had also suggested fuzzy keyword matching to detect some kinds of fraud, but Ofcom did not proceed with that. The regulator indicated that it would revisit fraud detection in a later, broader consultation. That has now arrived.

The new U2U proposals go beyond fraud. They propose perceptual hash-matching for visual terrorism content and for intimate image abuse content. They suggest that content should be excluded from recommender feeds if there are indications that it is potentially illegal, unless and until it is determined via content moderation to be legal. 

Most ambitiously, Ofcom wants certain relatively large platforms to research the availability and suitability (in accordance with proposed criteria) of proactive technology for detection of fraud and some other illegal behaviour, then implement it if appropriate. Those platforms would also have to review existing technologies that they use for these purposes and, if feasible, bring them into line with Ofcom’s criteria.

Ofcom calls this a ‘principles-based’ measure, probably because it describes a qualitative evaluation and configuration process rather than prescribing any concrete parameters within which the technology should operate.

Freedom of expression

Legal obligations for proactive content detection, blocking and removal engage the fundamental freedom of expression rights of users. Obligations must therefore comply with ECHR human rights law, including requirements of clarity and certainty.

Whilst a principles-based regime may be permissible, it must nevertheless be capable of predictable application. Otherwise it will stray into impermissible vagueness. Lord Sumption in Catt said that what is required is a regime the application of which is:

“reasonably predictable, if necessary with the assistance of expert advice. But except perhaps in the simplest cases, this does not mean that the law has to codify the answers to every possible issue which may arise. It is enough that it lays down principles which are capable of being predictably applied to any situation.”

In Re Gallagher he said that:

“A measure is not “in accordance with the law” if it purports to authorise an exercise of power unconstrained by law. The measure must not therefore confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself. Nor should it be couched in terms so vague or so general as to produce substantially the same effect in practice.”

Typically these strictures would apply to powers and duties of public officials. The Online Safety Act is different: it requires U2U service providers to make content decisions and act (or not) to block or remove users’ posts. Thus the legal regime that requires them to do that has to provide sufficient predictability of their potential decisions and resulting acts.

In addition to fraud and financial services offences, Ofcom’s proposed principles-based measures would apply to image based CSAM, CSAM URLs, grooming, and encouraging or assisting suicide (or attempted suicide).

Any real-time automated content moderation measure poses questions about human rights compatibility. The auguries are not promising: proactive technology, armed only with the user’s post and perhaps some other on-platform data, will always lack contextual information. For many offences off-platform information can be the difference between guilt and innocence.  Decisions based on insufficient information inevitably stray into arbitrariness.

Then there is the trade-off between precision and recall. Typically, the more target content the automated tool is tuned to catch, the more false positives it will also throw up. False positives result in collateral damage to legitimate speech. It does not take many false positives to constitute disproportionate interference with users’ rights of freedom of expression.
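A toy calculation, using invented figures rather than data about any real service or tool, shows how quickly that trade-off bites. Suppose 1,000 of a day’s posts are actually illegal, and compare two tunings of the same hypothetical detector:

```python
# Illustrative arithmetic only: the volumes, precision and recall figures are
# assumptions, not measurements of any real service or technology.

def outcomes(illegal_posts: int, recall: float, precision: float) -> tuple[int, int]:
    true_positives = illegal_posts * recall           # illegal posts correctly caught
    total_flagged = true_positives / precision        # everything the tool flags
    false_positives = total_flagged - true_positives  # lawful posts swept up with them
    return round(true_positives), round(false_positives)

ILLEGAL_PER_DAY = 1_000   # out of, say, 1,000,000 posts a day

# Tuned cautiously: high precision, modest recall.
print(outcomes(ILLEGAL_PER_DAY, recall=0.60, precision=0.90))   # (600, 67)

# Tuned to 'catch more': higher recall, lower precision.
print(outcomes(ILLEGAL_PER_DAY, recall=0.95, precision=0.50))   # (950, 950)
```

On these made-up numbers, catching a few hundred more illegal posts comes at the price of more than ten times as many lawful posts wrongly flagged each day.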

Lord Grade, the Chairman of Ofcom, said in a recent speech that the aims of tackling criminal material and content that poses serious risks of harm to children’s physical or emotional health were not in conflict with freedom of expression. Indeed so, but focusing only on the aim misses the point: however worthy the end, it is the means – in this case proactive technology – that matters.

Prescribed by law

Ofcom’s Proactive Technology Draft Guidance says this about proportionality of the proposed measures:

“Proactive technology used for detection of harmful content involves making trade-offs between false positives and false negatives. Understanding and managing those trade-offs is essential to ensure the proactive technology performs proportionately, balancing the risk of over-removal of legitimate content with failure to effectively detect harm.” (para 5.14)

Proportionality is a requirement of human rights compliance. However, before considering proportionality a threshold step has to be surmounted: the ‘prescribed by law’ or ‘legality’ condition. This is a safeguard against arbitrary restrictions – laws should be sufficiently precise and certain that they have the quality of law.

The prescribed by law requirement is an aspect of the European Convention on Human Rights. It has also been said to be a UK constitutional principle that underpins the rule of law:

"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it." (Lord Diplock, Black-Clawson [1975])

The Constitutional Reform Act 2005 refers in S.1 to:

“the existing constitutional principle of the rule of law”.

For content monitoring obligations the quality of law has two facets, reflecting the potential impact of the obligations on the fundamental rights of both platforms and users.

The platform aspect is written into the Act itself:

“the measures described in the code of practice must be sufficiently clear, and at a sufficiently detailed level, that providers understand what those measures entail in practice”. (Schedule 4)

The user aspect is not spelled out in the Act but is no less significant for that. Where a user’s freedom of speech may be affected by steps that a platform takes to comply with its duties, any interference with the user’s right of freedom of expression must be founded on a clear and precise rule.

That means that a user must be able to foresee in advance with reasonable certainty whether something that they have in mind to post is or is not liable to be blocked, removed or otherwise affected as a result of the obligations that the Act places on the platform.

That is not simply a matter of users themselves taking care to comply with substantive law when they consider posting content. The Act interpolates platforms into the process and may require them to make judgements about whether the user’s post is or is not illegal. Foreseeability is therefore a function both of the substantive law and of the rules about how a platform should make those judgements.

If, therefore, the mechanism set up by the Act and Ofcom for platforms to evaluate, block and take down illegal content is likely to result in unpredictable, arbitrary determinations of what is and is not illegal, then the mechanism fails the ‘prescribed by law’ test and is a per se violation of the right of freedom of expression.

Equally, if the regime is so unclear about how it would operate in practice that a court is not in a position to assess its proportionality, that would also fail the ‘prescribed by law’ test. That is the import of Lord Sumption’s observations in Catt and Gallagher (above).

A prescriptive bright-line rule, however disproportionate it might be, would satisfy the ‘prescribed by law’ test and fall to be assessed only by reference to necessity and proportionality. Ofcom’s principles-based recommendations, however, are at the opposite end of the spectrum: they are anything but a bright-line rule. The initial ‘prescribed by law’ test therefore comes into play.

How do Ofcom’s proposed measures stack up?

Service providers themselves would decide how accurate the technology has to be, what proportion of content detected by the technology should be subjected to human review, and what is an acceptable level of false positives.

Whilst Ofcom specifies various ‘proactive technology criteria’, those are expressed as qualitative factors to be taken into account, not quantitative parameters. Ofcom does not specify what might be an appropriate balance between precision and recall, nor what is an appropriate proportion of human review of detected content.

Nor does Ofcom indicate what level of false positives might be so high as to render the technology (alone, or in combination with associated procedures) insufficiently accurate.

Examples of Ofcom’s approach include:

“However, there are some limitations to the use of proactive technology in detecting or supporting the detection of the relevant harms. For example, proactive technology does not always deal well with nuance and context in the same way as humans.

However, we mitigate this through the proactive technology criteria which are designed to ensure proactive technology is deployed in a way that ensures an appropriate balance between precision and recall, and that an appropriate proportion of content is reviewed by humans.” (Consultation, para 9.92)

“Where a service has a higher tolerance for false positives, more content may be wrongly identified. … The extent of false positives will depend on the service in question and the way in which it configures its proactive technology. The measure allows providers flexibility in this regard, including as to the balance between precision and recall (subject to certain factors set out earlier in this chapter).” (Consultation, paras 9.135, 9.136)

“… when determining what is an appropriate proportion of detected content to review by humans, providers have flexibility to decide what proportion of detected content it is appropriate to review, however in so doing, providers should ensure that the following matters are taken into account…” (Consultation, para 9.19)

“However, in circumstances where false positives are consistently high and cannot be meaningfully reduced or mitigated, particularly where this may have a significant adverse impact on user rights, providers may conclude that the proactive technology is incapable of meeting the criteria.” (Proactive Technology Draft Guidance, para 5.19)

How high is high? How significant is significant? No answer is given, other than that the permissible level of false positives is related to the nature of the subsequent review of detected content. As we shall see, the second stage review does not require all content detected by the proactive technology to be reviewed by human beings. The review could, seemingly, be conducted by a second automated system.

The result is that two service providers in similar circumstances could arrive at completely different conclusions as to what constitutes an acceptable level of legitimate speech being blocked or taken down. Ofcom acknowledges that the flexibility of its scheme:

“could lead to significant variation in impact on users’ freedom of expression between services”. (Consultation, para 9.136)

That must raise questions about the predictability and foreseeability of the regime.

If the impact on users’ expression is not reasonably foreseeable, that is a quality of law failure and no further analysis is required. If that hurdle were surmounted, there is still the matter of what level of erroneous blocking and removal would amount to a disproportionate level of interference with users’ legitimate freedom of expression. 

Proportionality?

Ofcom concludes that:

“Having taken account of the nature and severity of the harms in question, the principles we have built into the measure to ensure that the technology used is sufficiently accurate, effective and lacking in bias, and the wider range of safeguards provided by other measures, we consider overall that the measure’s potential interference to users’ freedom of expression to be proportionate.” (Consultation, para 9.154)

However, it is difficult to see how Ofcom (or anyone else) can come to any conclusion as to the overall proportionality of the recommended principles-based measures when they set no quantitative or concrete parameters for precision versus recall, accuracy of review of suspect content, or an ultimately acceptable level of false positives.

Ofcom’s discussion of human rights compliance starts with proportionality. While it notes that the interference must be ‘lawful’, there is no substantive discussion of the ‘prescribed by law’ threshold.

Prior restraint

Finally, on the matter of human rights compatibility, proactive detection and filtering obligations constitute a species of prior restraint (Yildirim v Turkey (ECtHR), Poland v The European Parliament and Council (CJEU)).

Prior restraint is not impermissible. However, it does require the most stringent scrutiny and circumscription, in which risk of removal of legal content will loom large. The ECtHR in Yildirim noted that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.

The proactive technology criteria

Ofcom’s proactive technology criteria are, in reality, framed not as a set of criteria but as a series of factors that the platform should take into account.  Ofcom describes them as “a practical, outcomes-focused set of criteria.” [Consultation, para 9.13]

Precision and recall One criterion is that the technology has been evaluated using “appropriate” performance metrics and

“configured so that its performance strikes an appropriate balance between precision and recall”.  (Recommendation C11.3(c))

Ofcom evidently must have appreciated that, without elaboration, “appropriate” was an impermissibly vague determinant. The draft Code of Practice goes on (Recommendation C11.4(a)):

“when configuring the technology so that it strikes an appropriate balance between precision and recall, the provider should ensure that the following matters are taken into account:

i) the service’s risk of relevant harm(s), reflecting the risk assessment of the service and any information reasonably available to the provider about the prevalence of target illegal content on the service;

ii) the proportion of detected content that is a false positive;

iii) the effectiveness of the systems and/or processes used to identify false positives; and

iv) in connection with CSAM or grooming, the importance of minimising the reporting of false positives to the National Crime Agency (NCA) or a foreign agency;”

These factors may help a service provider tick the compliance boxes – ‘Yes, I have taken these factors into account’ – but they do not amount to a concrete determinant of what constitutes an appropriate balance between precision and recall.

Review of detected content Accuracy of the proactive technology is, as already alluded to, only the first stage of the recommended process. The service provider has to treat a detected item as providing ‘reason to suspect’ that it is illegal content, then move on to a second stage: review.

“Where proactive technology detects or supports the detection of illegal content and/or content harmful to children, providers should treat this as reason to suspect that the content may be target illegal content and/or content harmful to children.

Providers should therefore take appropriate action in line with existing content moderation measures, namely ICU C1 and ICU C2 (in the Illegal Content User-to-user Codes of Practice) and PCU C1 and PCU C2 (in the Protection of Children User-to-user Code of Practice), as applicable.” (Consultation, para 9.74)

That is reflected in draft Codes of Practice paras ICU C11.11, 12.9 and PCU C9.9, 10.7. For example:

“ICU C11.11 Where proactive technology detects, or supports the detection of, target illegal content in accordance with ICU C11.8(a), the provider should treat this as reason to suspect that the content may be illegal content and review the content in accordance with Recommendation ICU C1.3.”

‘Review’ does not necessarily mean human review. Compliance with the proactive technology criteria requires that:

“...policies and processes are in place for human review and action is taken in accordance with that policy, including the evaluation of outputs during development (where applicable), and the human review of an appropriate proportion of the outputs of the proactive technology during deployment. Outputs should be explainable to the extent necessary to support meaningful human judgement and accountability.” (Emphasis added) (draft Code of Practice Recommendation ICU C11.3(g))

The consultation document says:

“It should be noted that this measure does not itself recommend the removal of detected content. Rather, it recommends that providers moderate detected content in accordance with existing content moderation measures (subject to human review of an appropriate proportion of detected content, as mentioned above).” (Consultation, para 9.147)

And:

“Providers have flexibility in deciding what proportion of detected content is appropriate to review, taking into account [specified factors]” (Consultation, para 9.145)

Ofcom has evidently recognised that “appropriate proportion” is, without elaboration, another impermissibly vague determinant. It adds (Recommendation C11.4(b)):

“when determining what is an appropriate proportion of detected content to review by humans, the provider should ensure that the following matters are taken into account:

i) the principle that the resource dedicated to review of detected content should be proportionate to the degree of accuracy achieved by the technology and any associated systems and processes;

ii) the principle that content with a higher likelihood of being a false positive should be prioritised for review; and

iii) in the case of CSAM or grooming, the importance of minimising the reporting of false positives to the NCA or a foreign agency.”

As with precision and recall, these factors may help a service provider tick the compliance boxes but are not a concrete determinant of the proportion of detected content to be submitted to human review in any particular case.

Second stage review – human, more technology or neither?

The upshot of all this appears to be that content detected by the proactive technology should be subject to review in accordance with the Code of Practice moderation recommendations; and that an ‘appropriate proportion’ of that content should be subject to human review.

But if only an ‘appropriate proportion’ of content detected by the proactive technology has to be subject to human review, how is the rest to be treated? Since it appears that some kind of ‘appropriate action’ is contemplated in accordance with Ofcom’s content moderation recommendations, the implication appears to be that moderation at the second stage could be by some kind of automated system.

In that event it would seem that the illegal content judgement itself would be made by that second stage technology in accordance with Recommendation C1.3.

Recommendation C1.3, however, does not stipulate the accuracy of second stage automated technology. The closest that the Code of Practice comes is ICU C4.2 and 4.3:

“The provider should set and record performance targets for its content moderation function, covering at least:

a) the time period for taking relevant content moderation action; and

b) the accuracy of decision making.

In setting its targets, the provider should balance the need to take relevant content moderation action swiftly against the importance of making accurate moderation decisions.”

Once again, the path appears to lead to an unpredictable balancing exercise by a service provider.
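Pulling the threads together, the sketch below shows how the two-stage scheme appears to operate on this reading of the proposals. Every element – the detector, the 10% human review proportion and the second-stage classifier – is an invented placeholder standing in for choices that the provider itself would make; that discretion is the point at issue.

```python
# Minimal sketch of the apparent two-stage scheme, on the reading set out above.
# All names, proportions and classifiers are illustrative assumptions.

import random

HUMAN_REVIEW_PROPORTION = 0.10   # the provider's own 'appropriate proportion'

def proactive_detector(post: str) -> bool:
    # Stage 1: proactive technology yields only 'reason to suspect'.
    return "guaranteed returns" in post.lower()

def human_review(post: str) -> bool:
    return True  # stand-in for a human moderator's illegality judgement

def automated_second_stage(post: str) -> bool:
    # Stage 2 for the remainder: another automated judgement, whose accuracy
    # the Codes do not concretely stipulate.
    return "crypto" in post.lower()

def moderate(posts: list[str]) -> list[str]:
    removed = []
    for post in posts:
        if not proactive_detector(post):
            continue                          # never reaches content moderation
        if random.random() < HUMAN_REVIEW_PROPORTION:
            treat_as_illegal = human_review(post)
        else:
            treat_as_illegal = automated_second_stage(post)
        if treat_as_illegal:
            removed.append(post)
    return removed

print(moderate(["Guaranteed returns on this crypto scheme!", "See you at lunch"]))
```

Nothing in this sketch constrains how accurate either automated stage has to be, which mirrors the gap identified above.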

Curiously, elsewhere Ofcom appears to suggest that second stage “complementary tools” could in some cases merely be an ‘additional safeguard’:

“What constitutes an appropriate balance between precision and recall will depend on the nature of the relevant harm, the level of risk identified and the service context. For example, in some cases a provider might optimise for recall to maximise the quantity of content detected and apply additional safeguards, such as use of complementary tools or increased levels of human review, to address false positives. In other cases, higher precision may be more appropriate, for example, to reduce the risk of adverse impacts on user rights.” (Proactive Technology Draft Guidance, para 5.18)

If the implication of ‘in some cases’ is that in other cases acting on the output of the proactive technology without a second stage review would suffice, that would seem to be inconsistent with the requirement that all detected content be subject to some kind of moderation in accordance with Recommendation C1.3.

Moreover, under Ofcom’s scheme proactive technology is intended only to provide ‘reason to suspect’ illegality. That would not conform to the standard stipulated by the Act for an illegal content judgement: ‘reasonable grounds to infer’.

Conclusion

When, as Ofcom recognises, the impact on users’ freedom of expression will inevitably vary significantly between services, and Ofcom’s documents do not condescend to what is or is not an acceptable degree of interference with legitimate speech, it is difficult to see how a user could predict, with reasonable certainty, how their posts are liable to be affected by platforms’ use of proactive technology in compliance with Ofcom’s principles-based recommendations.

Nor is it easy to see how a court would be capable of assessing the proportionality of the measures. As Lord Sumption observed, the regime should not be couched in terms so vague or so general as, substantially, to confer a discretion so broad that its scope is in practice dependent on the will of those who apply it. Again, Ofcom's acknowledgment that the flexibility of its scheme could lead to significant variation in impact on users’ freedom of expression does not sit easily with that requirement.  

Ofcom, it should be acknowledged, is to an extent caught between a rock and a hard place. It has to avoid being overly technology-prescriptive, while simultaneously ensuring that the effects of its recommendations are reasonably foreseeable to users and capable of being assessed for proportionality. Like much else in the Act, that may in reality be an impossible circle to square. That does not bode well for the Act’s human rights compatibility.

[Amended 6 August 2025 to add ‘principles-based’ to the first paragraph of the Conclusion.]