“The brutal truth is that nothing is likely to trip up the Online Safety Bill.” So began a blogpost on which I was working just over a month ago. Fortunately, it was still unfinished when Boris Johnson imploded for the final time, the Conservative leadership election was triggered, and candidates – led by Kemi Badenoch – started to voice doubts about the freedom of speech implications of the Bill. Then the Bill’s Commons Report stage was put on hold until the autumn, to allow the new Prime Minister to consider how to proceed.
The resulting temporary vacuum has sucked in commentary from all sides, whether redoubled criticisms of the Bill, renewed pursuit of existing agendas, or alarm at the prospect of further delays to the legislation.
Delay, it should be acknowledged, was always hardwired into the Bill. The Bill’s regulatory regime, even at a weighty 218 pages, is a bare skeleton. It will have to be fleshed out by a sequence of secondary legislation [1], Ofcom codes of practice [2], Ofcom guidance [3], and designated categories of service providers – each with its own step-by-step procedures. That kind of long drawn-out process was inevitable once the decision was taken to set up a broadcast-style regime under the auspices of a discretionary regulator such as Ofcom.
In July 2022 Ofcom published an implementation road-map that would result in the earliest aspect of the regulatory regime (illegality safety duties) going live in mid-2024. We have to wonder whether that would have proved to be optimistic even without the current leadership hiccup and – presumably – a period of reflection before the Bill can proceed further.
The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.
As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies.
If, under a new Prime Minister, the government were to reimagine the Online Safety Bill, how might they do it? Especially, how might they achieve a quick win: a regime that could be put into effect immediately, rather than the best part of two years later - if ever?
The most vulnerable part of the Bill is probably the ‘legal but harmful to adults’ provisions [4]. However, controversial as they undoubtedly are, those are far from the most problematic features of the Bill.
Here are some other aspects that might be under the spotlight.
The new communications offences
The least controversial part of the Bill ought to be the new Part 10 criminal offences. Those could, presumably, come into force shortly after Royal Assent. However, some of them badly need fixing.
The new communications offences [5] have been designed to replace the Malicious Communications Act 1988 and the notorious S.127 Communications Act 2003. They have the authority of the Law Commission behind them.
Unfortunately, the new offences are a mess. The harmful communications offence [6], in particular, will plausibly create a veto for those most readily distressed by encountering views that they regard as deeply repugnant, even if that reaction is unreasonable. That prospect, and the consequent risk of legal online speech being chilled or removed, is exacerbated when the offence is combined with the illegality duty that the Bill, in its present form, would impose on all U2U platforms and search engines.
Part 10 of the Bill also has the air of unfinished business, with calls for further new offences such as deliberately sending flashing images to epileptics.
Make it about safety?
The 2017 Green Paper that started all this was entitled Internet Safety Strategy. Come the April 2019 White Paper, that had metamorphosed into Online Harms. Some have criticised the Bill’s reversion to Online Safety, although in truth the change is more label than substance. It does, however, prompt the question whether a desire for some quick wins would be served by focusing the Bill, in substance as well as in name, on safety in its core sense.
That is where much of the original impetus for the Bill stemmed from. Suicide, grooming, child abuse, physically dangerous ‘challenges’, violence – these are the stuff of safety-related duties of care. It is well within existing duty of care parameters to consider whether a platform has done something that creates or exacerbates a risk of physical injury as between users; then, whether a duty of care should be imposed; and if so, a duty to take what kind of steps (preventative or reactive) and in what circumstances. Some kinds of preventative duty, however, involve the imposition of general monitoring obligations, which are controversial.
A Bill focused on safety in its core sense – risk of physical injury - might usefully clarify and codify, in the body of the Bill, the contents of such a duty of care and the circumstances in which it would arise. A distinction might, for example, be drawn between positively promoting an item of user content, compared with simply providing a forum akin to a traditional message board or threading a conversation.
Duties of care are feasible for risk of physical injury, because physical injury is objectively identifiable. Physical injuries may differ in degree, but a bruise and a broken wrist are the same kind of thing. We also have an understanding of what gives rise to risk of physical injury, be it an unguarded lathe or a loose floorboard.
The same is not true for amorphous conceptions of harm that depend on the subjective perception of the person who encounters the speech in question. Speech is not a tripping hazard. Broader harm-based duties of care do not work in the same way, if at all, for controversial opinions, hate, blasphemy, bad language, insults, and all the myriad kinds of speech that to a greater or lesser extent excite condemnation, inflame emotions, or provoke anger, distress and assertions of risk of suffering psychological harm.
A subjective harm-based duty of care requires the platform to assess and weigh those considerations against the freedom of speech not only of the poster, but of all other users who may react differently to the same speech, then decide which should prevail. That is a fundamentally different exercise from the assessment of risk of physical injury that underpins a safety-related duty of care. An approach that assumes that risk of subjectively perceived speech harms can be approached in the same way as risk of objectively identifiable physical injury will inevitably end up floundering in the kind of morass in which the Bill now finds itself.
The difference from risk of physical injury was, perhaps unwittingly, illustrated in the context of the illegality duty by the then Digital Minister Chris Philp in the Bill’s Commons Committee stage. He was discussing the task that platforms would perform in deciding whether user content was illegal under the new ‘harmful communications’ offence (above). The platform would, he said, perform a balancing exercise in assessing whether the content was a contribution to a matter of public interest. No balancing exercise is necessary to determine whether a broken wrist is or is not a physical injury.
Again within the illegality duty, the new foreign interference offence under Clause 13 of the National Security Bill would be designated as a priority offence under the Online Safety Bill. That would require platforms to adjudge, among other things, risk of “spiritual injury”.
The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action.
Rethink the illegality duty? [7]
Requiring platforms to remove illegal user content sounds simple, but isn’t. During the now paused Commons Report Stage debate on the Bill, Sir Jeremy Wright QC (who, ironically, was the Secretary of State for Culture at the time when the White Paper was launched in April 2019) observed:
“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. …

… criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case.”
He might also have added that the relevant factual material for any given offence will often include information that is outside anything of which the platform can have knowledge, especially for real-time automated filtering systems [8].
In any event, it is pertinent to ask: for how many offences can illegality be determined with confidence simply by looking at the content itself and nothing else? Illegality often requires assessment of intention (sometimes, but not always, intention can be inferred from the content), of purpose, or of extrinsic factual information. The Bill now contains an illuminating, but ultimately unsatisfactory, attempt (New Clause 14) to address these issues.
The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.
Given the increasingly (if belatedly) apparent problems with the illegality duty, what options might a government coming to it with a fresh eye consider? The current solution, as with so many problematic aspects of the Bill, is to hand it off to Ofcom. New Clause 15 would require Ofcom to produce guidance on how platforms should go about adjudging illegality in accordance with NC 14.
Assuming that the illegality duty were not dropped altogether, other possibilities might include:
- Restrict any duty to offences where the existence of an offence (including any potentially available defences) is realistically capable of being adjudged on the face of the content itself with no further information [9].
- For in-scope offences, raise the illegality determination threshold from ‘reasonable grounds to infer’ to manifest illegality [10].
Steps of this kind might in any event be necessary to achieve ECHR compliance. They would also reflect broader traditions of protection of freedom of speech, such as the presumption against prior restraint.
Illegality across the UK [11]
Another consequence of illegality being a legal construct is that criminal offences vary across the UK. The Bill requires a platform, when preventing or removing user content under its illegality duty, to treat a criminal offence in one part of the UK as if it applied to the whole of the UK. This has the bizarre consequence that platforms will, for instance, have to apply in parallel both the new harmful communications offence contained in the Bill and its repealed predecessor, S.127 of the Communications Act 2003.
Why is that? Because S.127 will be repealed only for England and Wales and will remain in force in Scotland and Northern Ireland. Platforms would have to treat S.127 as if it still applied in England and Wales; and, conversely, the new England and Wales harmful communications offence as if it applied in Scotland and Northern Ireland.
In Committee on 14 June 2022 the then Minister confirmed that:

“…the effect of the clauses is a levelling up—if I may put it that way. Any of the offences listed effectively get applied to the UK internet, so if there is a stronger offence in any one part of the United Kingdom, that will become applicable more generally via the Bill.”
Of course the alternative of requiring platforms to undertake the (in reality impossible) task of deciding which UK law – England and Wales, Northern Ireland or Scotland – applied to which post or tweet would hardly be less problematic.
At Report stage the government added an amendment to the Bill [12] which would, for the future, mean that the non-priority illegality duty would apply only to offences enacted by the UK Parliament, or by devolved administrations with the consent of the Westminster government. If nothing else, that shines a brighter spotlight on the problem.
The role of Ofcom
The most radical option, were the government looking for a legislative quick win that cuts out delay, would be to jettison Ofcom and its companion two-year procession of secondary legislation, guidance and codes of practice.
In truth, as a matter of principle it was always a bad idea to apply discretionary broadcast-style regulation to individual speech. The way to govern individual speech is with clear, certain laws of general application. A government so minded might consider aligning practicality with principle.
If Ofcom’s role were to be scrapped, what could replace it? One alternative might be to take existing common law duties of care relating to risk of physical injury as the starting point, clarify and codify them on the face of the Bill (not by secondary legislation), and provide for enforcement otherwise than through a regulator.
That would require the scope and content of any duties under the Bill to be articulated, Goldilocks-style, to a reasonable level of clarity: not so abstract as to be vague, not so technology- and business-model-specific as to be unworkable, but just right. Granted, that is easier said than done; but still perhaps more achievable than attempting to launch the overladen supertanker that is now marooned in its passage through Parliament.
This approach would, however, raise questions about how some kinds of duty could be made compatible with the hosting protections originating in the EU eCommerce Directive, to which the government remains committed (unlike the prohibition on general monitoring obligations, from which the government has distanced itself).
There would then be the question of Ofcom’s proposed powers to issue notices requiring providers to use approved scanning and filtering technology [13]. Those powers are at best controversial, raising a plethora of issues of their own. If such powers were to be continued in some form, they could be a candidate for separate legislation, so that issues such as impact on privacy and end-to-end encryption could be brought out into the open and given the full debate that they deserve.
Other parts of the Bill
There is much more to the Bill than the parts discussed so far: fraudulent advertising [14] and age verification of pornographic websites [15] have Parts of the Bill to themselves. There are numerous children-related provisions [16], which to an extent overlap with the ICO Code of Practice on age-appropriate design and the currently mothballed Digital Economy Act 2017. Other aspects of the Bill include press exemptions promised at the time of the White Paper [17] (which always looked likely to be undeliverable and are still the subject of heavy debate); and the provisions constraining how Category 1 platforms can treat journalism and content of democratic importance [18].
These are all crafted on the basis of a regulatory regime operated by Ofcom. It would not be a simple matter to disentangle them from Ofcom, should the government contemplate a non-Ofcom fast track.
Online ASBIs
A refocused Bill could face the objection that it does not address some of the most unpleasant, yet currently legal, user behaviour that can be found online. That does not rule out the possibility of legislation that on its face (not consigned to secondary legislation) draws clear lines that may differ from those that apply today. But if Parliament is unwilling or unable to draw clear lines to govern behaviour regarded as beyond the pale, what other possibilities exist?
One answer, albeit itself somewhat controversial, is already sitting on the legislative shelf but, as far as can be seen, is gathering dust in the online context.
The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI, the successor to ASBOs) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes online disturbers of the peace, albeit in very broad terms. It maps readily on to the most egregious abuse, cyberbullying, harassment and the like.
The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017, August 2019 and June 2022) makes no mention of their use in relation to online behaviour. Yet nothing in the legislation restricts an ASBI to offline activities. Indeed, over 10 years ago The Daily Telegraph reported an ‘internet ASBO’ made under predecessor legislation against a 17-year-old who had been posting material on the social media platform Bebo. The order banned him from publishing material that was threatening or abusive and promoted criminal activity.
ASBIs raise difficult questions of how they should be framed and of proportionality. Some may have concerns about the broad terms in which anti-social behaviour is defined in the legislation. Nevertheless, the courts to which applications are made should, at least in principle, have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.
That said, the July 2020 Civil Justice Council Report “Anti-Social Behaviour and the Civil Courts” paints a somewhat dispiriting picture of the use of ASBIs offline. It highlights a practice of applying for orders ex parte – something that would be especially troubling for an ASBI that would affect the defendant’s freedom of expression. Concerns of that kind would have to be carefully addressed if online ASBIs were to be picked up and dusted off.
On the positive side, the usefulness of online ASBIs could be transformed if the government were to explore the possibility of extending the ability to apply to court for an online ASBI beyond the official authorities, for instance to selected voluntary organisations.
Finally, for a longer-term view of access to justice online, Section 9 of my submission to the Online Harms White Paper consultation has some blue-sky thoughts.
Footnotes (references are to the Bill sections as at the Commons Report stage, and to amendments and new clauses (NC) adopted during Report Stage on 22 July 2022 but not yet published as an amended Bill).
[1] Required secondary legislation – S.53: priority content harmful to children, primary priority content harmful to children; S.54: priority content harmful to adults; NB S.55: regulation-making; S.56: Ofcom review; S.60: reports to the National Crime Agency; S.81/Sch 11: service provider categorisation; S.98: overseas regulators; SS.141 and 142: super-complaints.
[2] Required Ofcom codes of practice – S.37/Sch 4: terrorism content, CSEA content, other duties (illegal content, children’s online safety, adults’ online safety, user empowerment, content of democratic importance, journalistic content, content reporting, complaints procedures); fraudulent advertising (Cat 1 and 2A providers). Code of practice measures must be compatible with pursuit of specified online safety objectives (Sch 4 para 3, or as amended by regulations). Draft codes of practice are subject to modification under the Secretary of State’s power of direction (S.40(1)).
[3] Required Ofcom guidance – S.48 (as amended at Report stage): service provider record-keeping, review and children’s risk assessments, service provider protection of news publisher content; S.58 (Cat 1 services): offer to users of identity verification; S.65 (Cat 1, 2A and 2B services): transparency reports; S.69 (regulated pornographic content providers): compliance with duties; S.85: illegal content risk assessments, children’s risk assessments, Cat 1 service adults’ risk assessments; S.130: enforcement action; S.143: super-complaints; NC15: service provider illegality judgements. Also note S.84: required Ofcom risk assessment, risks register and risk profiles for illegal content, content harmful to children, content harmful to adults.
[4] Legal but harmful to adults (Cat 1 services) – S.12: adults’ risk assessment; S.13: transparency and other duties; S.14: user empowerment; S.17(5): user content reporting; S.18(6): complaints procedures; S.54: meanings of content harmful to adults and priority content harmful to adults; S.55: regulations designating priority content harmful to adults; S.56: Ofcom review; S.64/Sch 8 (Cat 1, 2A and 2B services): transparency reports; S.81/Sch 11: service provider categorisation; S.84: Ofcom risk assessment, risks register and risk profiles; S.187: definition of ‘harm’ as physical or psychological harm.
[5] New communications offences – S.151 (harmful communications); S.152 (false communications); S.153 (threatening communications); S.154 (interpretation); S.155 (extraterritorial reach); S.156 (liability of corporate officers). S.151 and 152 make use of problematic ‘likely audience’ tests. S.153 ought to be uncontroversial but has adopted wider language than the Law Commission’s recommendation, resulting in possible overreach (discussed here).
[6] Harmful communications offence – S.151.
[7] Illegality duty – S.9 (U2U services), S.24 (search services).
[8] Real-time automated filtering systems – S.9(3)(a) and (b); cf also S.9(2); S.24(3)(a); cf also S.24(2); S.104: accredited technology (terrorism content, CSEA content); S.117: proactive technology requirement (illegal content, children’s online safety, fraudulent advertising).
[9] Capability of adjudging illegality on the face of content alone – This would involve review of at least the priority offences designated in Sch 7.
[10] Manifest illegality – This would involve reconsideration of NC 14. There is uncertainty in the Bill about whether, and if so how far, a provider would be expected to go looking for information in order to determine whether there were reasonable grounds to infer an offence (para 10 of the Government Fact Sheet suggests that this would be left to Ofcom guidance). This seems most likely to be relevant to the reactive duties specified in S.9(3)(c) and S.24(3)(b) rather than to real-time automated monitoring and filtering (S.9(3)(a) and (b); cf also S.9(2); S.24(3)(a)).
[11] Illegality applied across the UK – S.52(9) and (12).
[12] Future devolved offences – Amendment 94.
[13] Scanning and filtering technology powers – S.104: use of accredited technology (terrorism content, CSEA content); S.117: inclusion of proactive technology requirements in Ofcom confirmation decisions (illegal content, children’s online safety, fraudulent advertising).
[14] Fraudulent advertising (Cat 1 and Cat 2A services) – SS.34 and 35.
[15] Pornographic site age verification – SS.66 to 68.
[16] Children-related provisions – For service providers the main duties for content harmful to children are set out in S.31: children’s access assessments; SS.10 and 25: children’s risk assessments; SS.11 and 26: children’s safety duties; S.53: primary priority, priority and non-designated content harmful to children; S.187: definition of ‘harm’ as physical or psychological harm.
[17] News publisher content exemptions – S.49(2)(g) and S.51(2)(b): exclusion from scope of safety duties; NC19 (Cat 1 services): duties to protect news publisher content. S.49(2)(e): comments and reviews on provider content. Recognised news publishers are also among those exempted from two of the three new communications offences: S.151(6)(a) (harmful communications); S.152(4)(a) (false communications).
[18] Journalism and content of democratic importance (Cat 1 services) – S.16: journalistic content; S.15: content of democratic importance.
[Typo (2009) corrected to 2019, 19 Aug 2022. Footnotes added 9 Oct 2022.]