Tuesday 13 December 2022

(Some of) what is legal offline is illegal online

From what feels like time immemorial the UK government has paraded its proposed online harms legislation under the banner of ‘What is Illegal Offline is Illegal Online’. As a description of what is now the Online Safety Bill, the slogan is ill-fitting. The Bill contains nothing that extends to online behaviour a criminal offence previously limited to the offline world. 

That is for the simple reason that almost no such offences exist. An exception that proves the rule is the law requiring imprints only on physical election literature, a gap that has been plugged not by the Online Safety Bill but by the Elections Act 2022.  

If the slogan is intended to mean that since what is illegal offline is illegal online, equivalent mechanisms should be put in place to combat online illegality, that does not compute either. As we shall see, the Bill's approach differs significantly from offline procedures for determining the illegality of individual speech - not just in form and process, but in the substantive standards to be applied.  

Perhaps in implicit recognition of these inconvenient truths, the government’s favoured slogan has undergone many transformations:

- “We will be consistent in our approach to regulation of online and offline media.” (Conservative Party Manifesto, 18 May 2017)

- “What is unacceptable offline should be unacceptable online.” (Internet Safety Strategy Green Paper, October 2017)

- “Behaviour that is illegal offline should be treated the same when it’s committed online.” (Then Digital Minister Margot James, 1 November 2018)

- “A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…” (Then Culture Secretary Jeremy Wright QC, 21 February 2019)

- “For illegal harms, it is also important to ensure that the criminal law applies online in the same way as it applies offline” (Online Harms White Paper, April 2019)

- "Of course... what is illegal offline is illegal online, so we have existing laws to deal with it." (Home Office Lords Minister Baroness Williams, 13 May 2020)

- “If it’s unacceptable offline then it’s unacceptable online” (DCMS, tweet 15 December 2020)

- "If it is illegal offline, it is illegal online.” (Then Culture Secretary Oliver Dowden, House of Commons 15 December 2020)

- “The most important provision of [our coming online harms legislation] is to make what's illegal on the street, illegal online” (Then Culture Secretary Oliver Dowden, 29 March 2021)

- “What's illegal offline should be regulated online.” (Damian Collins, then Chair of the Joint Pre-Legislative Scrutiny Committee, 14 December 2021)

- “The laws we have established to protect people in the offline world, need to apply online as well.” (Then former DCMS Minister Damian Collins MP, 2 December 2022)

Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online?  The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." 

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that.

Section 127 – the online-only criminal offence

First, consider illegality in its most commonly understood sense: criminal offences.

The latest version of the Bill scraps the previously proposed new harmful communications offence, reinstating S.127(1) of the Communications Act 2003, which it would have replaced. The harmful communications offence, for all its grievous shortcomings, made no distinction between offline and online. S.127(1), however, is online only. Moreover, it is more restrictive than any offline equivalent.

S.127(1), notoriously, makes it an offence to send by means of a public electronic communications network a “message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. It is difficult to be sure of its precise scope – indeed one of the main objections to it is the vagueness inherent in ‘grossly offensive’. But it has no direct offline counterpart. 

The closest equivalent is the Malicious Communications Act 1988, also now to be reprieved. The MCA applies to both offline and online communications. Whilst, like S.127(1), it contains the ‘grossly offensive’ formulation, it is narrower by virtue of a purpose condition that is absent from S.127(1). The MCA offence also appears not to apply to generally available, non-targeted postings on an online platform (Law Commission Scoping Report 2018, paras 4.26 to 4.29). That leaves S.127(1) not only broader in substance, but also catching many kinds of online communication to which the MCA does not apply at all.

Para 4.63 of the Law Commission Scoping Report noted: “Indeed, as subsequent Chapters will illustrate, section 127 of the CA 2003 criminalises many forms of speech that would not be an offence in the “offline” world, even if spoken with the intention described in section 127.”

For S.127(1) that situation will now continue - at least while the government gives further consideration to the criminal law on harmful communications. But although the new harmful communications offence was rightly condemned, was the government really faced with a binary choice between frying pan and fire?

Online liability to have content filtered or removed

Second, we have illegality in terms of ‘having my content compulsorily removed’.

This is not illegality in the normal sense of liability to be prosecuted and found guilty of a criminal offence. Nor is it illegality in the sense of being sued and found liable in the civil courts. It is more akin to an author having their book seized with no further sanction. We lawyers may debate whether this is illegality properly so called. To the user whose online post is filtered or removed it will certainly feel like it, even though no court has declared the content illegal or ordered its seizure.

The Bill creates this kind of illegality (if it be such) in a novel way: an online post would be filtered or removed by a platform because it is required to do so by virtue of a preventative or reactive duty of care articulated in the Bill. This creature of statute has - for speech - no offline equivalent. See discussion here and here.

The online-offline asymmetry does not stop there. If we dig more deeply into a comparison with criminal offences we find other ways in which the Bill’s illegality duty treats online content more restrictively than offline. 

Two features stand out, both stemming from the Bill's recently inserted clause setting out how online platforms should adjudge the illegality of users' content.

The online illegality inference engine

First, in contrast to the criminal standard of proof – beyond reasonable doubt – the platform is required to find illegality if it has ‘reasonable grounds to infer’ that the elements of the offence are present.  That applies both to factual elements and to any required purpose, intention or other mental element.

The acts potentially constituting an offence may be cast widely, in which event the most important issues are likely to be intent and whether the user has an available defence (such as, in some cases, reasonable excuse). 

Under the Bill, unless the platform has information on the basis of which it can infer that a defence may successfully be relied on, the possibility of a defence is to be left out of consideration.  That leads into the second feature.

The online information vacuum

The Bill requires platforms to determine illegality on the basis of information reasonably available to them. But how much (or little) information is that likely to be?  

Platforms will be required to make decisions on illegality in a comparative knowledge vacuum. The paucity of information is most apparent in the case of proactive, automated real time filtering. A system can work only on user content that it has processed, which inevitably omits extrinsic contextual information. 

For many offences, especially those in which defences such as reasonable excuse bear the main legality burden, that missing contextual information would otherwise be likely to form an important, even decisive, part of determining whether an offence has been committed. 

For both of these reasons the Bill’s approach to online would inevitably lead to compulsory filtering and removal of legal online content at scale, in a way that has no counterpart offline. It is difficult to see how a requirement on platforms to have regard (or particular regard, as a proposed government amendment would have it) to the importance of protecting users’ right to freedom of expression within the law could act as an effective antidote to express terms of the legislation that spell out how platforms should adjudge illegality.

Online prior restraint

These two features exist against the background that the illegality duty is a form of prior restraint: the Bill requires content filtering and removal decisions to be made before any fully informed, fully argued decision on the merits takes place (if it ever would). A presumption against prior restraint has long formed part of the English common law and of human rights law. For online, no longer.