A blogpost headed “The Online Safety Act: A First Amendment for the UK” lacks nothing in shock value. The one thing that we might have thought supporters and critics alike were agreed upon is that the Act, whether for good or ill, does not – and is certainly not intended to – enact the US First Amendment. If anything, it is seen as an antidote to US free speech absolutism.
We might
even be tempted to dismiss it as parody. But look more closely and this comes
from a serious source: the Age Verification Providers Association. So,
perforce, we ought to treat the claim seriously.
The title, we
must assume, is a rhetorical flourish rather than a literal contention that the
Act compares with an embedded constitutional right capable of striking down
incompatible legislation. Drilling down beyond the title, we find the slightly less
adventurous claim that Section 22 of the Act (not the Act as a whole)
constitutes “A new First Amendment for the UK”.
What is Section 22?
Section 22 places
a duty on platforms to “have particular regard to the importance of protecting
users’ right to freedom of expression within the law” when implementing the
Act’s safety duties.
Like so much
in the Act, this duty is not everything that it might seem at first sight. One
thing it obviously is not is a constitutional protection conferring the power
to invalidate legislation. So then, what are AVPA’s arguments that Section
22, even loosely speaking, is a UK First Amendment?
By way of
preliminary, the article’s focus is mostly on the Act’s illegality duties: “The
Act’s core aim is to enhance online safety by requiring user-to-user services
(e.g. social media platforms) to address illegal content…” and “Far
from censoring legal content, the OSA targets only illegal material,
…”.
That might come
as news to those who were under the impression that the Act’s core aim was
protection of children. The children’s duties do target some kinds of material
that are legal but considered to be harmful for under-18s. But let us mentally
put the children’s duties to one side and concentrate on the illegality duties.
Those require blocking or removal of user content, as opposed to hiding it
behind age gates.
AVPA’s arguments boil down to five main points:
- The Act imposes no obligation to remove legal content for adults.
- The Act’s obligations leave lawful speech untouched (or at least not unduly curtailed).
- Section 22 is a historic moment, being the first time that Parliament has legislated an explicit duty for online services to protect users’ lawful speech, enforceable by Ofcom.
- The Section 22 protection goes beyond ECHR Article 10 rights.
- Section 22 improves on the ECHR’s reliance on court action.
Taking these
in turn:
No obligation to remove legal content for adults
This ought to be the case, but is not. It is certainly true that the express ‘legal but harmful for adults’ duties in the original Bill were dropped after the 2022 Conservative leadership election. But when we drill down into the Act’s duties to remove illegal content, we find that they bake in a requirement to remove some legal content. This is for three distinct reasons (all set out in S.192):
1. The test for whether a platform must treat user content as illegal is “reasonable grounds to infer”. That is a relatively low threshold that will inevitably capture some content that is in fact legal.
2. Platforms have to make the illegality judgement on the basis of all relevant information reasonably available to them. Off-platform information that is not reasonably available to the platform (but which might provide the context demonstrating that user content is in fact legal) is therefore left out of account.
3. The Act requires the platform to ignore the possibility of a defence, unless it positively has reasonable grounds to infer that a defence may be successful. Grounds that do in fact exist may well not be apparent from the information available to the platform.
In this way, content that is in fact a false positive is required to be treated as if it were a true positive.
These issues
are exacerbated if platforms are required to engage in proactive detection
and removal of illegal content using automated technology.
The AVPA article calls
out “detractors labeling it as an instrument of censorship that stifles
online expression. This narrative completely misrepresents the Act’s purpose
and effect”. One purpose of the Act is certainly to tackle illegal content.
Purpose, however, is only the label on the legislative tin. Effect is what the tin contains. Inside this tin we
find substantive obligations that will inevitably result in legal content being removed: not through over-caution, but as a matter of what the Act expressly requires.
The inevitable collateral damage to lawful speech embedded in the illegal content provisions has always been a concern to critics who have taken the time to read the Act.
Lawful speech untouched
The article
suggests that the Act “ensur[es] lawful speech remains untouched”. However, the Act cannot ensure zero false positives. For many, perhaps most, offences, illegality cannot reliably be adjudged simply by looking at the post. Add to that the effects of S.192, and lawful speech will inevitably be affected to some degree. Later
in the AVPA post the somewhat less ambitious claim is made that “regulatory
oversight will, as the regime matures and is better understood, ensure lawful
expression isn’t unduly curtailed.” (emphasis added)
These objections are not just an abstract matter of parsing the text of the Act. Consider the current Ofcom consultation on proactive technology, in which Ofcom declines to set a concrete cap on false positives for its ‘principles-based’ measures. It acknowledges that:
“The extent of false positives will depend on the service in
question and the way in which it configures its proactive technology. The
measure allows providers flexibility in this regard, including as to the
balance between precision and recall (subject to certain factors set out
earlier in this chapter). We recognise that this could lead to significant
variation in impact on users’ freedom of expression between services.”
[9.136] (emphasis added)
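To see what that flexibility can mean in practice, take a purely illustrative set of numbers (invented for this post, not drawn from the consultation or from any real service): suppose a platform’s proactive technology is tuned to catch 95% of genuinely illegal posts (its recall) with 90% precision. Scanning a million posts of which 10,000 are actually illegal, it would flag roughly 9,500 of them correctly; but at 90% precision those detections come bundled with over a thousand posts that are perfectly lawful, every one of which the illegality duties would then require to be treated as a true positive. Tune the system for higher recall and, typically, precision falls and still more lawful posts are swept up. Nothing in the ‘principles-based’ measures fixes where that balance must be struck.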
Section 22’s historic moment
“Section 22 marks a historic moment as
the first time Parliament has legislated an explicit duty for online
services to protect users’ lawful speech and privacy, enforceable by Ofcom.” (underlining in the original)
This proposition is probably at the heart of AVPA’s argument. The article later adds: “While the UK lacks a single written constitution, Section 22 effectively strengthens free speech within the UK’s legal framework. It’s a tailored, enforceable safeguard for the digital age, making platforms accountable for preserving expression while still tackling illegal content. Far from enabling censorship, the OSA through Section 22 sets a new standard for protecting online discourse.”
A casual
reader might assume that Parliament had imposed a new self-standing duty on
platforms to protect users’ lawful speech, akin to the general duty imposed on
higher education institutions by the Higher Education (Freedom of Speech) Act
2023. That requires such institutions to:
“… take the steps that, having particular regard to the
importance of freedom of speech, are reasonably practicable for it to take in
order to … secur[e] freedom of speech within the law for— (a) staff of the
provider, (b) members of the provider, (c) students of the provider, and (d)
visiting speakers.”
The Section
22 duty is quite different: a subsidiary counter-duty intended to mitigate the
impact on freedom of expression of the Act’s main safety (and, for Category 1
services, user empowerment) duties.
Thus it
applies, as the AVPA article says: “when designing safety measures”. To
be clear, it applies only to safety measures implemented in order to comply
with the duties imposed by the Act. It has no wider, standalone application.
When that is
appreciated, the reason why Parliament has not previously legislated such a
duty is obvious. It has never previously legislated anything like an online
safety duty – with its attendant risk of interference with users’ legitimate
freedom of expression – which might require a mitigating provision to be
considered.
Nor, it should be emphasised, does Section 22 override the Act’s express safety duties. It is no more than a “have particular regard to the importance of” duty.
Moreover, the Section 22 duty is refracted through the prism of Ofcom’s safety Codes: it is deemed to be satisfied if a platform complies with the safeguards set out in an Ofcom Safety Code of Practice. What those safeguards should consist of is for Ofcom to decide.
The relevance
of the Section 22 duty is, on the face of it, especially limited when it comes
to the platform’s illegal content duties. The duty relates to the user’s right
to “freedom of expression within the law”. Since illegal content is outside the
law, what impact could the freedom of expression duty have? Might it encourage
a platform to err on the side of the user when making marginal decisions about
illegality? Perhaps. But a “have particular regard” duty does not rewrite the
plain words of the Act prescribing how a platform has to go about making
illegality judgements. Those provisions (viz S.192) bake in the removal of some legal content.
All that
considered, it is a somewhat bold suggestion that Section 22 marks a historic moment, or that it sets a new standard for protecting online discourse. Section 22 exists at all only because of the risk to freedom of expression presented by
the Act’s safety duties.
The Section 22 protection goes beyond ECHR Article 10 rights
The AVPA
article says that “This is the first time UK domestic legislation explicitly
protects online expression beyond the qualified rights under Article 10
of the European Convention on Human Rights (ECHR), as incorporated via the
Human Rights Act 1998.” (emphasis added)
If this
means only that the right referred to in Section 22 is something different from
the ECHR Article 10 right, that has to be correct. It is not, however, more extensive. The ‘within the law’ qualification renders the scope of the Section 22 right narrower than that of Article 10. ECHR rights can address overreaching domestic laws (and under the Human Rights Act a court can make a declaration of incompatibility). On the face of it, the Section 22 protection cannot reach beyond domestic law.
Section 22 improves on the ECHR’s reliance on court action
Finally, the AVPA article says that “Unlike the ECHR which often requires costly and lengthy
court action to enforce free speech rights, Section 22 embeds these protections
directly into the regulatory framework for online platforms. Ofcom can
proactively warn – and now has – or penalize platforms that over-block legal
content ensuring compliance without requiring individuals to go to court. This
makes the protection more immediate and practical, …”
This is not
the place to debate whether the possibility of action by a
regulator is in principle a superior remedy to legal action by individuals. That raises questions not only about access to justice, but also about how far it is sensible to put faith in a regulator. The rising chorus of grumbles about Ofcom’s implementation of the Act might suggest 'not very'. But that would
take us into the far deeper waters of the wisdom or otherwise of adopting a
‘regulation by regulator’ model. We don’t need to take that plunge today.
Ofcom has always emphasised that its supervision and enforcement activities are
concerned with platforms’ systems and processes, not with individual content moderation
decisions: “… our job is not to opine on individual items of content. Our job
is to make sure that companies have the systems that they need” (Oral evidence
to Speaker’s Conference, 3 September 2025).
To be sure, that has always seemed a bit of a stretch: how is Ofcom supposed to take a view on whether a platform’s systems and processes are adequate without considering examples of its individual moderation decisions? Nevertheless, it is not Ofcom’s function to take up individual complaints. A user hoping that Ofcom enforcement might be a route to reinstatement of their cherished social media post is liable to be disappointed.