A Parliamentary petition calling for repeal of the Online Safety Act has reached over 550,000 signatures and is due to be debated on 15 December.
The demand for abolition is eye-catching, but inevitably lacks
nuance. In any case it is tolerably clear that the petition is aimed not at the
entire Act, but at its core: the set of regulatory safety duties imposed on
platforms and search engines. Even for those elements, the petition envisages not
bare repeal but repeal and replacement:
“We believe that the scope of the
Online Safety act is far broader and restrictive than is necessary in a free
society.
For instance, the definitions in
Part 2 covers online hobby forums, which we think do not have the resource to
comply with the act and so are shutting down instead.
We think that Parliament should
repeal the act and work towards producing proportionate legislation rather than
risking clamping down on civil society talking about trains, football, video
games or even hamsters because it can't deal with individual bad faith actors.”
Of course the Act contains much more than the safety duties:
in particular, new communications and other criminal offences, age assurance
requirements on non-user-to-user pornography websites in Part 5, and separate duties
around fraudulent paid-for advertising.
Communications offences
Most, if not all, of the new criminal offences either improve
on what went before or are eminently justifiable. Is anyone suggesting, for
instance, that deliberate epilepsy trolling or cyberflashing should not be an
offence?
Possibly the only offence about which there could be some debate
is the S.179 false communications offence. It has, rightly or wrongly, become something
of a lightning rod for critics concerned about over-broadly criminalising
disinformation.
Whatever doubts may exist about its current formulation, the
S.179 offence is still an improvement on what went before. Its antecedents can
be traced back to 1935, when distressing hoax telegrams were a concern:
“there are a number of cases
where people … will send a telegram to a person saying that somebody is
seriously ill and the recipient is put to great anxiety, sometimes even to
expense, and then finds it is a bogus message. … That silly practice which causes
anxiety - in some cases a telegram has been sent declaring a person to be dead
when there was no foundation for the statement - ought to be stopped.” (Postmaster-General,
Hansard, 5 March 1935)
The 1935 offence covered telegrams and telephone calls. In 1969
it was broadened to include messages sent by means of a public
telecommunications service. In 2003 it was amended to cover communications sent
across a public electronic communications network. By that time internet
communications were undoubtedly in scope. Following a Law Commission recommendation, S.179 OSA replaced the 2003 offence.
The 1935 offence (as updated in 1969 and 2003) was framed in
terms of causing “annoyance, inconvenience, or needless anxiety”. S.179’s
“non-trivial physical or psychological harm”, whatever the criticism that it has
attracted as a definition, is narrower.
As to the application of S.179 to fake news, the Law
Commission said:
“It is important to note at the
outset that the proposals were not intended to combat what some consultees
described in their responses as “fake news”. While some instances of deliberate
sharing of false information may fall within the scope of the offences, our
view is that the criminal law is not the appropriate mechanism to regulate
false content online more broadly.” (Modernising Communications Offences, Final
Report, 20 July 2021, para 3.3)
When the new offences came into force in January 2024 the government
announced that the false information offence criminalised “sending fake news
that aims to cause non-trivial physical or psychological harm.”
Notwithstanding the improvement on its predecessors, S.179
has features in common with the harmful communications offence (also
recommended by the Law Commission) which was rightly dropped from the Bill. As
a result of that decision the long-criticised S.127(1) Communications Act 2003 and
S.1(a)(i) Malicious Communications Act 1988 “grossly offensive” offences remain
in force. A re-examination of how to replace those is overdue. In conjunction,
another look could be taken at the false information offence.
The core safety duties
Returning to the core safety duties, do they merit repeal? The underlying point of principle is that the Act’s safety duties rest on defectively designed foundations: a flawed analogy with the duty of care owed to visitors by the occupier of real-world premises. That, it might be said, calls for exposing the root cause rather than compiling a snagging list.
Why is the analogy flawed? The Act’s safety duties are about
illegal (criminal) user content and activities, and user content harmful to
children. Occupiers’ liability is about risk of causing physical injury. Those
are categorically different. You don’t need 444 pages of Ofcom Illegal Content Judgements
Guidance to decide that a projecting nail in a floorboard poses a risk of
physical injury. Moreover the remedy – hammering down the nail – hurts no-one.
Content judgements about illegality, in particular, tend to be nuanced and
complex, and will often depend on contextual information that is unavailable to
the platform. Hammering down nails at scale inevitably results in arbitrary
judgements and suppression of legitimate user content. That is magnified when
the Act contemplates automated content detection and removal in order to fulfil
a platform’s duties.
In short, speech is not a tripping hazard and it was folly
to design legislation as if it were.
Nor, it might be suggested, should anyone be thinking of adding
more layers to this already teetering wedding cake. It is a confection with which
no-one – supportive or critical – is currently very happy, albeit for widely
differing reasons. If vaguer and more subjective notions of harmful content are
piled onto the legislation, the structural cracks can only expand.
So, where may critics’ attentions be focused? The Open Rights Group, both on its own account and jointly with other civil society organisations, has produced detailed briefing papers ahead of the Parliamentary debate.
Here, using U2U services by
way of illustration, are some thoughts of my own on possible areas of interest.
S.10(2)(a) Proactive content filtering
Section 10(2)(a) imposes a duty on platforms to take or use
proportionate measures relating to the design or operation of the service to “prevent
individuals from encountering priority illegal content by means of the service”.
Priority illegal content means content amounting to any of a list of over 140 criminal offences, plus their
inchoate versions: encouraging and assisting, conspiracy and so on.
The EU Digital Services Act adopts a diametrically opposed
stance, retaining the long-standing EU prohibition on imposing general
monitoring obligations on platforms.
A platform that complies with recommendations made in an
Ofcom Code of Practice is deemed to comply with the duty laid down in the Act.
Ofcom started, in its original Illegal Content Code of Practice, by recommending
CSAM perceptual hash-matching and URL-matching for some platforms.
It is now going further in its Additional Safety Measures consultation, proposing to apply perceptual hash-matching to terrorism and
intimate image abuse content, and proposing (among other things) ‘principles-based’
proactive technology measures for a broader range of illegal content. Unlike
the previous CSAM hash-matching recommendation, IIA perceptual hash-matching
could be carried out against an unverified database of hashes.
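By way of illustration only, the sketch below shows the general shape of a perceptual hash-matcher of the kind these recommendations contemplate. It is not Ofcom’s accredited technology or any vendor’s product, and the hash values and distance threshold are invented for the example:

```python
# Toy sketch only: not Ofcom's accredited technology or any vendor's product.
# The hash values and distance threshold below are invented for illustration.

KNOWN_HASHES = {
    0xD1C3A5F09B2E4477,  # hypothetical 64-bit perceptual hashes of known items
    0x3F00AA5512345678,
}
MAX_HAMMING_DISTANCE = 8  # hypothetical "similar enough" threshold


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two 64-bit hash values differ."""
    return bin(a ^ b).count("1")


def matches_known_item(item_hash: int) -> bool:
    """True if the item's perceptual hash falls within the threshold of any
    hash in the reference database. Unlike a cryptographic hash, a perceptual
    hash match is a similarity judgement, not proof of identity, and the
    result is only as good as the database being matched against."""
    return any(
        hamming_distance(item_hash, known) <= MAX_HAMMING_DISTANCE
        for known in KNOWN_HASHES
    )
```

The point to note is that a perceptual hash match is a similarity judgement rather than proof of identity, and its reliability depends both on the threshold chosen and on the provenance of the hash database against which matching is carried out.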
However, once extended beyond matching against a list of
verified illegal items, the assumption underpinning content detection and
upload filtering – that technology can make accurate content judgements – becomes questionable. Automated illegality judgements made in the
absence of full contextual information are inherently arbitrary. Measures that
result in too many false positives cannot be proportionate.
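The proportionality point can be made concrete with some simple base-rate arithmetic. Every figure below is an assumption chosen purely for illustration, not an estimate for any real service or detection tool:

```python
# Purely illustrative arithmetic. All figures are assumptions made up for the
# example; none are Ofcom's, or estimates for any real service or tool.

daily_posts = 10_000_000       # assumed user posts per day on a large service
prevalence = 0.001             # assumed: 0.1% of posts are actually illegal
false_positive_rate = 0.01     # assumed: 1% of legal posts wrongly flagged
true_positive_rate = 0.95      # assumed: 95% of illegal posts correctly flagged

illegal_posts = daily_posts * prevalence
legal_posts = daily_posts - illegal_posts

wrongly_flagged = legal_posts * false_positive_rate    # legal speech suppressed
correctly_flagged = illegal_posts * true_positive_rate

print(f"Legal posts wrongly flagged per day: {wrongly_flagged:,.0f}")        # 99,900
print(f"Illegal posts correctly flagged per day: {correctly_flagged:,.0f}")  # 9,500
print(f"Proportion of flags that are false positives: "
      f"{wrongly_flagged / (wrongly_flagged + correctly_flagged):.0%}")      # 91%
```

On those assumed figures roughly nine out of ten flagged items would be legal content wrongly identified: where illegal content is rare, flags are dominated by false positives even when the tool’s headline accuracy looks impressive.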
Ofcom’s proposed ‘principles-based’ measures are its most controversial recommendations. Ofcom has declined to specify what is an
acceptable level of false positives, leaving that judgement to service
providers. As Ofcom’s Director for Online Safety Strategy Delivery Mark Bunting
said in evidence to the Lords Communications and Digital Committee:
“We think it is right that firms
have to take responsibility for making the judgment themselves about whether
the tool is sufficiently accurate and effective for their use. That is their
judgment, and they should use it; we are expecting that to drive wider use than
the regulator just issuing directions.” (14 October 2025)
Ofcom itself acknowledges that its principles-based
proposals:
“could lead to significant
variation in impact on users’ freedom of expression between services”.
(Consultation, para 9.136)
That conflicts with the reasonable degree of certainty for users that compliance with the ECHR requires.
Section 10(2)(a) is likely to be high up the critics’ list: too
general, flawed in its underlying assumptions, and (as is now evident) a means
for Ofcom to propose principles-based proactive technology measures that are so
vague as to be non-compliant with basic rule of law and ECHR requirements.
S.192 Illegality judgements
A related issue, but relevant also to reactive content
moderation, is the standard that the Act sets for illegality judgements. S.192
specifies how service providers should go about determining whether a given
item of content is illegal (or, for that matter, whether it is content of
another relevant kind such as content harmful to children). The provision was introduced into the Bill
by amendment, following the Independent Reviewer of Terrorism Legislation’s
‘Missing Pieces’ paper which rightly pointed out the Bill’s lack of clarity
about how service providers were to go about adjudging illegality, especially
the mental element of any offence.
Section 192, although it clarifies the illegality judgement
thresholds, bakes in removal of legal content: most obviously in the stipulated
threshold of “reasonable grounds to infer” illegality. It also requires the
platform to ignore the possibility of a defence unless it has positive grounds
to infer that a defence may succeed.
The S.192 threshold also sits uneasily with S.10(3)(b),
which requires a proportionate process for swift removal when a platform is
alerted by a person to the presence of any illegal content, or “becomes aware”
of it in any other way.
S.121 Accredited technology notices
This is perhaps the most overtly controversial Ofcom power
in the Act. For CSAM, the S.121 power can – unlike the preventive content duty
under S.10(2) – require accredited proactive detection technology to be applied
to private communications. Critics have questioned whether the power could be
used to undermine or circumvent end-to-end encryption, or to render it impossible for a service provider to offer it.
As with proactive technology measures, Ofcom has declined, in its consultation on minimum standards of accuracy of accredited technology for S.121 purposes, to specify quantitative limits on what might be an acceptable level of accuracy.
Ofcom’s relationship with government
The extent to which government is in a position to put pressure on
Ofcom, in principle an independent regulator, has come into clearer focus since
Royal Assent. The most notable example is the Secretary of State’s ‘deep disappointment’ letter to Ofcom of 12 November 2025.
The Act also contains specific mechanisms giving the
Executive a degree of control, or at least influence, over Ofcom.
S.44 Powers of direction These powers enable the
Secretary of State to direct Ofcom to modify a draft code of practice on
various grounds, including national security, public safety, public health, or
relations with a foreign government. The original provisions were criticised
during the passage of the Bill and were narrowed as a result. So far they have
not been used.
S.92 and S.172 Strategic priorities S.172 enables the
Secretary of State to designate a statement setting out the government’s online
safety strategic priorities. If the Secretary of State does so, Ofcom must have regard to the
statement when carrying out its online safety functions. The government
has already used this power.
Whatever Parliament may have thought was meant by
‘strategic’ priorities, or ‘particular outcomes identified with a view to
achieving the strategic priorities’, the 30-page Statement designated by the Secretary of State on 2 July 2025 goes into what might be thought to be operational areas: for instance, interpreting
‘safe by design’ as contemplating the use of technology in content moderation.
S.175 Special circumstances directions The Secretary
of State has power to give a direction to Ofcom, in exercising its media
literacy functions, if the Secretary of State has reasonable grounds for
believing that there is a threat to the health or safety of the public, or to
national security. A direction could require Ofcom to give priority to
specified objectives, or require Ofcom to give notice to service providers
requiring them to make a public statement about steps they are taking in
response to the threat. The grounding of this power in Ofcom’s media literacy functions
renders its ambit somewhat opaque.
S.98 Ofcom’s risk register and sectoral risk profiles
S.98 mandates Ofcom to produce risk registers and risk
profiles for different kinds of services, grouped as Ofcom thinks fit according
to their characteristics and risk levels. ‘Characteristics’ includes functionalities,
user base, business model, governance and other systems and processes. ‘Risk’ means risk of physical or psychological
harm presented by illegal content or activity, or by content harmful to
children.
In preparing its work product Ofcom abandoned any notion
that functionality has to create or exacerbate a risk of illegal content or
offences. Instead, Ofcom’s risk register is based on correlation: evidence that
malefactors have made use of functionality available on platforms and search
engines.
That has led Ofcom to designate common-or-garden
functionality, such as the ability to use hyperlinks, as risk factors. That, it
might be thought, turns the right of freedom of expression on its head. We do
not treat the ability to use pen and paper, a typewriter, or a printing press
as an inherent risk.
The length, complexity and frequent impenetrability of Ofcom’s work product are also noteworthy. The Illegal Harms Register of Risks runs to 480
pages, accompanied by 84 pages of Risk Assessment Guidance and Profiles.
Harm, illegality, or both?
The Act’s safety duties vary as to whether they are trying
to protect users from risk of encountering illegal content per se, risk
of suffering harm (physical or psychological) as a result of encountering illegal
content, or both. This may seem a rather technical point, but is symptomatic of
the Act’s deeper confusion about what it is trying to achieve.
Ofcom’s efforts to simplify the duties by referring to
‘illegal harms’ in some of its documents added to the confusion. Nor were matters helped by the Act’s designation, as priority offences, of some offences for which the likelihood of physical or
psychological harm would appear to be remote (consider money-laundering, for
instance).
There is also a curious mismatch between what the Act
requires for Ofcom’s Risk Registers and Risk Profiles, compared with risk
assessments carried out by service providers. Ofcom’s work products are
required only to consider risk of harm (physical or psychological) presented by
illegal content, whereas service provider risk assessments are also required to
consider illegality per se.
S.1 The purpose clause
Section 1 of the Act is unlikely to be in the sights of many
critics. But, innocuous as it may appear, it deserves to be.
The purpose clause ostensibly sets out the overall purposes
of the Act. It was the last-minute product of a new-found spirit of cross-party
collaboration that infused the House of Lords in the final days of the Bill. In
reality, however, it illustrates the underlying lack of
clarity about what the Act is trying to achieve.
Purpose clauses are of debatable benefit at the best of
times: if they add nothing to the text of the Act, they are superfluous. If
they differ from the text of the Act, they are prone to increase the difficulty
of interpretation. This section uses terminology that appears nowhere else in
the Act, and is caveated with ‘among other things’ and ‘in broad terms’.
Possibly the low point of Section 1 is the reference to the
need for services to be ‘safe by design’. Neither ‘safe’ nor ‘safe by design’ is defined in the Act. They are susceptible of any number of interpretations.
One school of thought regards safety by design as being
about giving thought at the design stage to safety features that are preferably
content-agnostic and not focused on content moderation. The government, in its
Statement of Strategic Priorities, takes a different view: safety by design is
about preventing harm from occurring in the first place. That includes
deploying technology to improve the scale and effectiveness of content
moderation.
That interpretation readily translates into proactive content
detection and filtering technology (see the discussion of section
10(2)(a) above). Indeed Ofcom, in its response to the government’s Statement of
Strategic Priorities section on safety by design, refers to its own consultation on
proactive technologies.
Fundamental rethink?
There are more problems with the Act: vague core definitions that even Ofcom will not give a view on, over-reaching territoriality, the
inclusion of small, low-risk, volunteer-led forums, questions around age assurance and
age-gating, concerns that the definitions of content legal but harmful
to children are imprecise and may deny children access to beneficial content,
and others.
The most radical option would be to rethink the broadcast-style ‘regulation
by regulator’ model altogether. This observer has always viewed the adoption of that model as a fundamental error. Nothing that has occurred since has changed
that view. If anything, it has been reinforced. Delay, expense and an inevitably bureaucratic approach were hard-wired into the legislation. The opportunity cost
of the years and resources spent heading down that rabbit hole has to be immense.
The results are now attracting criticism from all sides:
supporters, opponents and government alike. The mystery is why everyone concerned could not see what was designed into the
legislation from the start. If you put your faith in a discretionary regulator
rather than in clear legal rules, prepare for disappointment when the regulator
does not do what you fondly imagined that it would. If you wanted the regulator
to be bold and ambitious, be prepared for the project to end up in the courts
when the regulator overreaches.
Finally, the Online Safety Act project has been bedevilled throughout
by a tendency to equate all platforms with large, algorithmically driven, social
media companies. Even now, a Lords amendment recently tabled to the Children’s
Wellbeing and Schools Bill, claiming to be about “introducing regulations to
prevent under 16s from accessing social media”, is drafted so as to apply to
all regulated user-to-user services as defined in the Online Safety Act – a vastly
wider cohort of services.
That takes us back to the flawed analogy with occupiers’
liability. Possibly the analogy was conceived with large social media companies
in mind. But then, if the projecting nail in the floorboard is actually a
social media company’s engagement algorithm, not the user’s speech itself, that
would suggest legislation based on a completely different foundation:
one that focuses on features and functionalities that create or exacerbate a
risk of specific, tightly defined, objectively ascertainable kinds of injury.
Put another way, if you want to legislate about safety, make it about safety properly so called; if you want to legislate about Big Tech and the Evil Algorithm, make it about that; if you want to legislate about children, make it about that.
What alternative approaches might there be? I don’t pretend to have complete answers, but suggested some in my response to the Online Harms White Paper back in 2019; and again in this post, written during the 2022 hiatus while the Conservatives sorted out their leadership crisis.
