Tuesday, 13 December 2022

(Some of) what is legal offline is illegal online

From what feels like time immemorial the UK government has paraded its proposed online harms legislation under the banner of ‘What is Illegal Offline is Illegal Online’. As a description of what is now the Online Safety Bill, the slogan is ill-fitting. The Bill contains nothing that extends to online behaviour a criminal offence that was previously limited to offline. 

That is for the simple reason that almost no such offences exist. An exception that proves the rule is the law requiring imprints only on physical election literature, a gap that has been plugged not by the Online Safety Bill but by the Elections Act 2022.  

If the slogan is intended to mean that since what is illegal offline is illegal online, equivalent mechanisms should be put in place to combat online illegality, that does not compute either. As we shall see, the Bill's approach differs significantly from offline procedures for determining the illegality of individual speech - not just in form and process, but in the substantive standards to be applied.  

Perhaps in implicit recognition of these inconvenient truths, the government’s favoured slogan has undergone many transformations:

- “We will be consistent in our approach to regulation of online and offline media.” (Conservative Party Manifesto, 18 May 2017)

- “What is unacceptable offline should be unacceptable online.” (Internet Safety Strategy Green Paper, October 2017)

- “Behaviour that is illegal offline should be treated the same when it’s committed online.” (Then Digital Minister Margot James, 1 November 2018)

- “A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…” (Then Culture Secretary Jeremy Wright QC, 21 February 2019)

- “For illegal harms, it is also important to ensure that the criminal law applies online in the same way as it applies offline” (Online Harms White Paper, April 2019)

- "Of course... what is illegal offline is illegal online, so we have existing laws to deal with it." (Home Office Lords Minister Baroness Williams, 13 May 2020)

- “If it’s unacceptable offline then it’s unacceptable online” (DCMS, tweet 15 December 2020)

- "If it is illegal offline, it is illegal online.” (Then Culture Secretary Oliver Dowden, House of Commons 15 December 2020)

- “The most important provision of [our coming online harms legislation] is to make what's illegal on the street, illegal online” (Then Culture Secretary Oliver Dowden, 29 March 2021)

- “What's illegal offline should be regulated online.” (Damian Collins, then Chair of the Joint Pre-Legislative Scrutiny Committee, 14 December 2021)

- “The laws we have established to protect people in the offline world, need to apply online as well.” (Then former DCMS Minister Damian Collins MP, 2 December 2022)

Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online?  The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." 

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that.

Section 127 – the online-only criminal offence

First, consider illegality in its most commonly understood sense: criminal offences.

The latest version of the Bill scraps the previously proposed new harmful communications offence, reinstating S.127(1) of the Communications Act 2003 which it would have replaced. The harmful communications offence, for all its grievous shortcomings, made no distinction between offline and online. S.127(1), however, is online only. Moreover, it is more restrictive than any offline equivalent.

S.127(1), notoriously, makes it an offence to send by means of a public electronic communications network a “message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. It is difficult to be sure of its precise scope – indeed one of the main objections to it is the vagueness inherent in ‘grossly offensive’. But it has no direct offline counterpart. 

The closest equivalent is the Malicious Communications Act 1988, also now to be reprieved. The MCA applies to both offline and online communications. Whilst, like S.127(1), it contains the ‘grossly offensive’ formulation, it is narrower by virtue of a purpose condition that is absent from S.127(1). Also, the MCA offence appears not to apply to generally available, non-targeted postings on an online platform (Law Commission Scoping Report 2018, paras 4.26 to 4.29). That leaves S.127(1) not only broader in substance, but catching many kinds of online communication to which the MCA does not apply at all.

Para 4.63 of the Law Commission Scoping Report noted: “Indeed, as subsequent Chapters will illustrate, section 127 of the CA 2003 criminalises many forms of speech that would not be an offence in the “offline” world, even if spoken with the intention described in section 127.”

For S.127(1) that situation will be continued - at least while the government gives further consideration to the criminal law on harmful communications. But although the new harmful communications offence was rightly condemned, was the government really faced with having to make a binary choice between frying pan and fire?

Online liability to have content filtered or removed

Second, we have illegality in terms of ‘having my content compulsorily removed’.

This is not illegality in the normal sense of liability to be prosecuted and found guilty of a criminal offence. Nor is it illegality in the sense of being sued and found liable in the civil courts. It is more akin to an author having their book seized with no further sanction. We lawyers may debate whether this is illegality properly so called. To the user whose online post is filtered or removed it will certainly feel like it, even though no court has declared the content illegal or ordered its seizure.

The Bill creates this kind of illegality (if it be such) in a novel way: an online post would be filtered or removed by a platform because it is required to do so by virtue of a preventative or reactive duty of care articulated in the Bill. This creature of statute has - for speech - no offline equivalent. See discussion here and here.

The online-offline asymmetry does not stop there. If we dig more deeply into a comparison with criminal offences we find other ways in which the Bill’s illegality duty treats online content more restrictively than offline. 

Two features stand out, both stemming from the Bill's recently inserted clause setting out how online platforms should adjudge the illegality of users' content.

The online illegality inference engine

First, in contrast to the criminal standard of proof – beyond reasonable doubt – the platform is required to find illegality if it has ‘reasonable grounds to infer’ that the elements of the offence are present.  That applies both to factual elements and to any required purpose, intention or other mental element.

The acts potentially constituting an offence may be cast widely, in which event the most important issues are likely to be intent and whether the user has an available defence (such as, in some cases, reasonable excuse). 

Under the Bill, unless the platform has information on the basis of which it can infer that a defence may successfully be relied on, the possibility of a defence is to be left out of consideration.  That leads into the second feature.
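Before coming to the second feature, it may help to make the first concrete. Below is a deliberately crude sketch, in Python, of the decision rule just described, as a platform’s moderation logic might encode it. Nothing in it comes from the Bill’s text: every name is invented for illustration, and any real system would be far more elaborate. The point is only the shape of the test.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical summary of what a platform's systems can see about a post."""
    elements_inferred: bool             # reasonable grounds to infer the conduct and mental element?
    defence_info_present: bool = False  # any information suggesting a defence (e.g. reasonable excuse)?

def treat_as_illegal(a: Assessment) -> bool:
    # The threshold is 'reasonable grounds to infer', not proof beyond
    # reasonable doubt; and the possibility of a defence is disregarded
    # unless information supporting one happens to be available.
    return a.elements_inferred and not a.defence_info_present
```

Note the default: silence about a defence resolves against the content, whereas in a criminal court the prosecution must prove guilt beyond reasonable doubt.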

The online information vacuum

The Bill requires platforms to determine illegality on the basis of information reasonably available to them. But how much (or little) information is that likely to be?  

Platforms will be required to make decisions on illegality in a comparative knowledge vacuum. The paucity of information is most apparent in the case of proactive, automated real time filtering. A system can work only on user content that it has processed, which inevitably omits extrinsic contextual information. 

For many offences, especially those in which defences such as reasonable excuse bear the main legality burden, such absent contextual information would otherwise be likely to form an important, even decisive, part of determining whether an offence has been committed. 

For both of these reasons the Bill’s approach to online would inevitably lead to compulsory filtering and removal of legal online content at scale, in a way that has no counterpart offline. It is difficult to see how a requirement on platforms to have regard (or particular regard, as a proposed government amendment would have it) to the importance of protecting users’ right to freedom of expression within the law could act as an effective antidote to the express terms of the legislation that spell out how platforms should adjudge illegality.

Online prior restraint

These two features exist against the background that the illegality duty is a form of prior restraint: the Bill requires content filtering and removal decisions to be made before any fully informed, fully argued decision on the merits takes place (if it ever would). A presumption against prior restraint has long formed part of the English common law and of human rights law. For online, no longer.



Friday, 25 November 2022

How well do you know the Online Safety Bill?

With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate? Click on the expandable text to find out.  

The Bill makes illegal online what is illegal offline.
No. We have to go a long way to find a criminal offence that does not already apply online as well as offline (other than those such as driving a car without a licence, which by their nature can apply only to the physical world). One of the few remaining anomalies is the paper-only requirement for imprints on election literature – a gap that will be plugged when the relevant provisions of the Elections Act 2022 come into force.

Moreover, in its fundamentals the Bill departs from the principle of online-offline equivalence. Its duties of care are extended in ways that have no offline equivalent. It creates a broadcast-style Ofcom regulatory regime that has no counterpart for individual speech offline: regulation by discretionary regulator rather than by clear, certain, general laws.

The real theme underlying the Bill is far removed from offline-online equivalence. It is that online speech is different from offline: more reach, more persistent, more dangerous and more in need of a regulator’s controlling hand.

Under the Bill's safety duty, before removing a user's post a platform will have to be satisfied to the criminal standard that it is illegal.
No. The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.

Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon.

The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality. In a recent House of Lords debate Lord Gilbert, who chaired the Lords Committee that produced a Report on Freedom of Expression in the Digital Age, asked whether the government had considered a change in the standard from “reasonable grounds to believe” to “manifestly illegal”.  The government minister replied by referring to the "reasonable grounds to infer" amendment, which he said would protect against both under-removal and over-removal of content.

The Bill will repeal the S.127 Communications Act 2003 offences.
Half true. Following a recommendation by the England and Wales Law Commission the Bill will replace both S.127 (of Twitter Joke Trial notoriety) and the Malicious Communications Act 1988 with new offences, notably sending a harmful communication.

However, the repeal of S.127 is only for England and Wales. S.127 will continue in force in Scotland. As a result, for the purposes of a platform’s illegality safety duty the Bill will deem the remaining Scottish S.127 offence to apply throughout the UK. So in deciding whether it has reasonable grounds to infer illegality a platform would have to apply both the existing S.127 and its replacement. [Update: the government announced on 28 November 2022 that the 'grossly offensive' offences under S.127(1) and the MCA 1988 will no longer be repealed, following its decision to drop the new harmful communications offence.] 

A platform may be required to adjudge whether a post causes spiritual injury.
True. The National Security Bill will create a new offence of foreign interference. One route to committing the offence requires establishing that the conduct involves coercion. An example of coercion is given as “causing spiritual injury to, or placing undue spiritual pressure on, a person”.

The new offence would be designated as a priority offence under the Online Safety Bill, meaning that platforms would have to take proactive steps to prevent users encountering such content.

A platform may be required to adjudge whether a post represents a contribution to a matter of public interest.
True. The new harmful communications offence (originating from a recommendation by the Law Commission) provides that the prosecution must prove, among other things, that the sender has no reasonable excuse for sending the message. Although not determinative, one of the factors that the court must consider (if it is relevant in a particular case) is whether the message is, or is intended to be, a contribution to a matter of public interest.

A platform faced with a complaint that a post is illegal by virtue of this offence would be put in the position of making a judgment on public interest, applying the standard of whether it has reasonable grounds to infer illegality. During the Commons Committee stage the then Digital Minister Chris Philp elaborated on the task that a platform would have to undertake. It would, he said, perform a "balancing exercise" in assessing whether the content was a contribution to a matter of public interest. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age contains the following illuminating exchange: 'We asked the Law Commission how platforms’ algorithms and content moderators could be expected to identify posts which would be illegal under its proposals. Professor Lewis told us: “We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it.”' However, it is the freedom of speech of users, not businesses, that is violated by the arbitrariness inherent in requiring platforms to adjudge vague laws.

Platforms would be required to filter users’ posts.
Highly likely, at least for some platforms. All platforms would be under a duty to take proportionate proactive steps to prevent users encountering priority illegal content, and (for services likely to be accessed by children) to prevent children from encountering priority content harmful to children. The Bill gives various examples of such steps, ranging from user support to content moderation, but the biggest clues are in the Code of Practice provisions and the enforcement powers granted to Ofcom.

Ofcom is empowered to recommend in a Code of Practice (if proportionate for a platform of a particular kind or size) proactive technology measures such as algorithms, keyword matching, image matching, image classification or behaviour pattern detection in order to detect publicly communicated content that is either illegal or harmful to children. Its enforcement powers similarly include use of proactive technology. Ofcom would have additional powers to require accredited proactive technology to be used in relation to terrorism content and CSEA (including, for CSEA, in relation to private messages).
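To picture what the simplest item on that list might involve, here is a minimal keyword-matching sketch in Python. The patterns and names are invented purely for illustration; any technology that Ofcom actually recommends or accredits would be far more sophisticated. The structural point stands, though: the filter’s only input is the content it processes.

```python
import re

# Invented patterns, purely for illustration - not drawn from the Bill or any code of practice.
PRIORITY_PATTERNS = [
    re.compile(r"\bexample banned phrase\b", re.IGNORECASE),
    re.compile(r"\banother flagged term\b", re.IGNORECASE),
]

def flag_for_review(post_text: str) -> bool:
    """Return True if the post matches any configured pattern.

    Note what is not available here: the sender's intention, the wider
    conversation, or any facts that might support a defence.
    """
    return any(p.search(post_text) for p in PRIORITY_PATTERNS)
```

More sophisticated classifiers share the same structural limit: they can weigh only what is in the content they process.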

The Bill regulates platforms, not users.
False dichotomy. The Bill certainly regulates platforms, but does so by pressing them into service as proxies to control content posted by users. The Bill thus regulates users at one remove. It also contains new criminal offences that would be committed directly by users.

The Bill outlaws hurting people's feelings.
No, but the new harmful communications offence comes close. It would criminalise sending, with no reasonable excuse, a message carrying a real and substantial risk that it would cause psychological harm - amounting to at least serious distress - to a likely member of the audience, with the intention of causing such harm. There is no requirement that the response of a hypothetical seriously distressed audience member should be reasonable. One foreseeable hypersensitive outlier is enough. Nor is there any requirement to show that anyone was actually seriously distressed.

The Law Commission, which recommended this offence, considered that it would be kept within bounds by the need to prove intent to cause harm and the need to prove lack of reasonable excuse, both to the criminal standard. However, the standard to which platforms will operate in assessing illegality is reasonable grounds to infer. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill also refers to psychological harm in other contexts, but without defining it further. The government intends that psychological harm should not be limited to a medically recognised condition.

The Bill recriminalises blasphemy.
Quite possibly. Blasphemy was abolished as a criminal offence in England and Wales in 2008 and in Scotland in 2021. The possible impact of the harmful communications offence (see previous item) has to be assessed against the background that people undoubtedly exist who experience serious distress (or at least claim to do so) upon encountering content that they regard as insulting to their religion. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill is all about Big Tech and large social media companies.
No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.

The Bill is all about algorithms that push and amplify user content.
No. The Bill makes occasional mention of algorithms, but the core duties would apply regardless of whether a platform makes use of algorithmic curation. A plain vanilla discussion forum is within scope.

The Secretary of State can instruct Ofcom to modify its Codes of Practice.
True. Section 40 of the Bill empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the Secretary of State believes that modifications are required (a) for reasons of public policy, or (b) in the case of a terrorism or CSEA code of practice, for reasons of national security or public safety. The Secretary of State can keep sending the modified draft back for further modification.

A platform will be required to remove content that is legal but harmful to adults.
No. The legal but harmful to adults duty (should it survive in the Bill) applies only to Category 1 platforms and on its face only requires transparency. Some have argued that its effect will nevertheless be heavily to incentivise Category 1 platforms to remove such content. [Update: the government announced on 28 November 2022 that the legal but harmful to adults duty will be dropped.]

The Bill is about systems and processes, not content moderation.
False dichotomy. Whilst the Bill's illegality and harm to children duties are couched in terms of systems and processes, it also lists measures that a service provider is required to take or use to fulfil those duties, if it is proportionate to do so. Content moderation, including taking down content, is in the list. It is no coincidence that the government’s Impact Assessment estimates additional moderation costs over a 10 year period at nearly £2 billion.

Ofcom could ban social media quoting features.
Indeterminate. Some may take the view that enabling social media quoting encourages toxic behaviour (the reason why the founder of Mastodon did not include a quote feature). A proponent of requiring more friction might argue that it is the kind of non-content oriented feature that should fall within the ‘safety by design’ aspects of a duty of care - an approach that some regard as preferable to moderating specific content.

Ofcom deprecation of a design feature would have to be tied to some aspect of a safety duty under the Bill and perhaps to risk of physical or psychological harm. There would likely have to be evidence (not just an opinion) that the design feature in question contributes to a relevant kind of risk within the scope of the Bill. From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.  

Of course the Bill does not tell us which design features should or should not be permitted. That is in the territory of the significant discretion (and consequent power) that the Bill places in the hands of Ofcom. If it were considered to be within scope of the Bill and proportionate to deprecate a particular design feature, in principle Ofcom could make a recommendation in a Code of Practice. That would leave it to the platform either to comply or to explain how it satisfied the relevant duty in some other way. Ultimately Ofcom could seek to invoke its enforcement powers.

The Bill will outlaw end to end encryption.
Not as such, but... Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.

The government’s response to the Pre-legislative Scrutiny Committee is also illuminating: “End-to-end encryption should not be rolled out without appropriate safety mitigations, for example, the ability to continue to detect known CSEA imagery.” 

The press are exempt.
True up to a point, but it’s complicated.

First, user comments under newspaper and broadcast stories are intended to be exempt as ‘limited functionality’ under Schedule 1 (but the permitted functionality is extremely limited, for instance apparently excluding comments on comments).

Second, platforms' safety duties do not apply to recognised news publisher content appearing on their services. However, many news and other publishers will fall outside the exemption. 

Third, various press and broadcast organisations are exempted from the new harmful and false communications offences created by the Bill. 
[Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

[Updated 3 December 2022 to take account of the government announcement on 28 November 2022.]

Wednesday, 2 November 2022

On the Dotted Line

The topic of electronic signatures seems cursed to eternal life. In the blue corner we have the established liberal English law approach to signatures, which eschews formality and emphasises intention to authenticate. In the red corner we have preoccupation with verifying the identity of the signatory, with technically engineered digital signatures and with the EU’s eIDAS hierarchy of qualified, advanced and ordinary electronic signatures.

In the English courts the blues have it. Judges have upheld the validity of electronic signatures as informal as signing a name at the end of an e-mail or even, in one case, clicking an ‘I accept’ button on an electronic form. They have been able to do this partly because, with very few exceptions, the England and Wales legislature has refrained from stipulating use of an eIDAS-compliant qualified or advanced signature as a condition of validity. The eIDAS hierarchy does form part of our law, but – rather like the Interpretation Act – in the guise of a toolkit that is available to be used or not as the legislature wishes. The toolkit has for the most part remained on the legislative shelf.

The potential consequences of stipulating eIDAS-style formalities in legislation are graphically illustrated by the Austrian case of the Wrong Kind of Signature. A €3bn contract to supply double-decker trains to Austrian Federal Railways was invalidated because the contract was signed with a qualified electronic signature supported by a Swiss, rather than an EU, Trusted Service Provider.

The modern English law aversion to imposition of formalities was pithily encapsulated in an official committee report of 1937, describing the Statute of Frauds:

“'The Act', in the words of Lord Campbell . . . 'promotes more frauds than it prevents'. True it shuts out perjury; but it also and more frequently shuts out the truth. It strikes impartially at the perjurer and at the honest man who has omitted a precaution, sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to assert that 'in the vast majority of cases its operation is simply to enable a man to break a promise with impunity, because he did not write it down with sufficient formality.' ”

For its part eIDAS continues to complicate and confound. February’s Interim Report of the Industry Working Group on the Electronic Execution of Documents, running to 94 pages of discussion, stated that ‘only’ qualified electronic signatures have equivalent legal status to handwritten signatures (meaning, according to the Report, that they carry a presumption of authenticity). Yet while eIDAS does require equivalent legal effect (whatever that may mean) to be accorded to qualified signatures, it does not require other kinds of electronic signature to be denied that status; nor has English domestic law done so.

Back in the courts, a recent decision of Senior Costs Judge Gordon-Saker in Elias v Wallace LLP [2022] EWHC 2574 (SCCO) continues down the road of upholding the validity of informal electronic signatures. Under the Solicitors Act 1974 (as amended) a solicitor’s bill cannot be enforced by legal proceedings unless it complies with certain formalities, including that it has to be:

“(a) signed by the solicitor or on his behalf by an employee of the solicitor authorised by him to sign, or

(b) enclosed in, or accompanied by, a letter which is signed as mentioned in paragraph (a) and refers to the bill.”

The Act states that the signature may be an electronic signature. It takes its definition of electronic signature from s.7(2) of the Electronic Communications Act 2000[1], as amended:  

“… so much of anything in electronic form as –

(a)   is incorporated into or otherwise logically associated with any electronic communication or electronic data; and

(b)   purports to be used by the individual creating it to sign.”

This is an unusual example of English legislation stipulating compliance with a defined kind of signature (albeit that S.7(2) is framed in very broad terms) as a condition of validity. Most legislation requiring a signature goes no further than a generally stated requirement that the document must be signed[2].

The bills in question were sent to the solicitor’s client as e-mail attachments. The bills themselves were not signed, but the covering e-mails concluded with the words:

“Best regards,

Alex

[first name and surname]

Partner

[telephone numbers, firm name and physical and website addresses]”.

The judge held:

  1. The printed name of the firm incorporated in the invoice, like a letterheading, was not a signature. This unsurprising conclusion is reminiscent of Mehta v J Pereira Fernandes SA [2006] EWHC 813 in which the same was held for an e-mail address appearing at the top of an e-mail.
  2. If the name ‘Alex’ was not generated automatically, clearly it purported to be used as a signature.
  3. If the name ‘Alex’ was auto-generated, then on the authority of Neocleous v Rees that would constitute a signature. The e-mail footer was clearly applied with authenticating intent, even if it was the product of a rule.

The judge also held that ‘letter’ should be interpreted to include e-mail. That is a salutary reminder that the ability to conduct a transaction electronically may not be only a question of whether electronic signatures are permissible. Other requirements of form and process can also come into play.

[1] Note that the role of S.7 was to make explicit (almost certainly unnecessarily) that electronic signatures as defined by the section were admissible as evidence, whereas the Solicitors Act provision concerns substantive validity.

[2] As to which, see the England and Wales Law Commission’s Statement of the Law in its Report on Electronic Execution of Documents (2019).



Thursday, 18 August 2022

Reimagining the Online Safety Bill

“The brutal truth is that nothing is likely to trip up the Online Safety Bill.” So began a blogpost on which I was working just over a month ago. Fortunately, it was still unfinished when Boris Johnson imploded for the final time, the Conservative leadership election was triggered, and candidates – led by Kemi Badenoch - started to voice doubts about the freedom of speech implications of the Bill. Then the Bill’s Commons Report stage was put on hold until the autumn, to allow the new Prime Minister to consider how to proceed.

The resulting temporary vacuum has sucked in commentary from all sides, whether redoubled criticisms of the Bill, renewed pursuit of existing agendas, or alarm at the prospect of further delays to the legislation.

Delay, it should be acknowledged, was always hardwired into the Bill. The Bill’s regulatory regime, even at a weighty 218 pages, is a bare skeleton. It will have to be fleshed out by a sequence of secondary legislation1, Ofcom codes of practice2, Ofcom guidance3, and designated categories of service providers - each with its own step-by-step procedures. That kind of long drawn-out process was inevitable once the decision was taken to set up a broadcast-style regime under the auspices of a discretionary regulator such as Ofcom. 

In July 2022 Ofcom published an implementation road-map that would result in the earliest aspect of the regulatory regime (illegality safety duties) going live in mid-2024. We have to wonder whether that would have proved to be optimistic even without the current leadership hiccup and – presumably - a period of reflection before the Bill can proceed further.

The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms. 

As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies.

If, under a new Prime Minister, the government were to reimagine the Online Safety Bill, how might they do it? Especially, how might they achieve a quick win: a regime that could be put into effect immediately, rather than the best part of two years later - if ever? 

The most vulnerable part of the Bill is probably the ‘legal but harmful to adults’ provisions4. However, controversial as they undoubtedly are, those are far from the most problematic features of the Bill.

Here are some other aspects that might be under the spotlight.

The new communications offences

The least controversial part of the Bill ought to be the new Part 10 criminal offences. Those could, presumably, come into force shortly after Royal Assent. However, some of them badly need fixing.

The new communications offences5 have been designed to replace the Malicious Communications Act 1988 and the notorious S.127 Communications Act 2003. They have the authority of the Law Commission behind them.

Unfortunately, the new offences are a mess. The harmful communications offence6, in particular, will plausibly create a veto for those most readily distressed by encountering views that they regard as deeply repugnant, even if that reaction is unreasonable. That prospect, and the consequent risk of legal online speech being chilled or removed, is exacerbated when the offence is combined with the illegality duty that the Bill, in its present form, would impose on all U2U platforms and search engines.

Part 10 of the Bill also has the air of unfinished business, with calls for further new offences such as deliberately sending flashing images to epileptics.

Make it about safety?

The 2017 Green Paper that started all this was entitled Internet Safety Strategy. Come the April 2019 White Paper, that had metamorphosed into Online Harms. Some have criticised the Bill’s reversion to Online Safety, although in truth the change is more label than substance. It does, however, prompt the question whether a desire for some quick wins would be served by focusing the Bill, in substance as well as in name, on safety in its core sense.

That is where much of the original impetus for the Bill stemmed from. Suicide, grooming, child abuse, physically dangerous ‘challenges’, violence – these are the stuff of safety-related duties of care. It is well within existing duty of care parameters to consider whether a platform has done something that creates or exacerbates a risk of physical injury as between users; then, whether a duty of care should be imposed; and if so, a duty to take what kind of steps (preventative or reactive) and in what circumstances. Some kinds of preventative duty, however, involve the imposition of general monitoring obligations, which are controversial.  

A Bill focused on safety in its core sense – risk of physical injury - might usefully clarify and codify, in the body of the Bill, the contents of such a duty of care and the circumstances in which it would arise. A distinction might, for example, be drawn between positively promoting an item of user content, compared with simply providing a forum akin to a traditional message board or threading a conversation. 

Duties of care are feasible for risk of physical injury, because physical injury is objectively identifiable. Physical injuries may differ in degree, but a bruise and a broken wrist are the same kind of thing. We also have an understanding of what gives rise to risk of physical injury, be it an unguarded lathe or a loose floorboard. 

The same is not true for amorphous conceptions of harm that depend on the subjective perception of the person who encounters the speech in question. Speech is not a tripping hazard.  Broader harm-based duties of care do not work in the same way, if at all, for controversial opinions, hate, blasphemy, bad language, insults, and all the myriad kinds of speech that to a greater or lesser extent excite condemnation, inflame emotions, or provoke anger, distress and assertions of risk of suffering psychological harm. 

A subjective harm-based duty of care requires the platform to assess and weigh those considerations against the freedom of speech not only of the poster, but of all other users who may react differently to the same speech, then decide which should prevail. That is a fundamentally different exercise from the assessment of risk of physical injury that underpins a safety-related duty of care. An approach that assumes that risk of subjectively perceived speech harms can be approached in the same way as risk of objectively identifiable physical injury will inevitably end up floundering in the kind of morass in which the Bill now finds itself.

The difference from risk of physical injury was, perhaps unwittingly, illustrated in the context of the illegality duty by the then Digital Minister Chris Philp in the Bill’s Commons Committee stage. He was discussing the task that platforms would perform in deciding whether user content was illegal under the new ‘harmful communications’ offence (above). The platform would, he said, perform a balancing exercise in assessing whether the content was a contribution to a matter of public interest. No balancing exercise is necessary to determine whether a broken wrist is or is not a physical injury.

Again within the illegality duty, the new foreign interference offence under Clause 13 of the National Security Bill would be designated as a priority offence under the Online Safety Bill. That would require platforms to adjudge, among other things, risk of “spiritual injury”. 

The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action. 

Rethink the illegality duty?7

Requiring platforms to remove illegal user content sounds simple, but isn’t. During the now paused Commons Report Stage debate on the Bill, Sir Jeremy Wright QC (who, ironically, was the Secretary of State for Culture at the time when the White Paper was launched in April 2019) observed:

“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. …

… criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case.”   

He might also have added that the relevant factual material for any given offence will often include information that is outside anything of which the platform can have knowledge, especially for real-time automated filtering systems8.

In any event, it is pertinent to ask how many offences exist for which illegality can be determined with confidence simply by looking at the content itself and nothing else. Illegality often requires assessment of intention (sometimes, but not always, intention can be inferred from the content), of purpose, or of extrinsic factual information.  The Bill now contains an illuminating, but ultimately unsatisfactory, attempt (New Clause 14) to address these issues.

The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.

Given the increasingly (if belatedly) apparent problems with the illegality duty, what options might a government coming to it with a fresh eye consider? The current solution, as with so many problematic aspects of the Bill, is to hand it off to Ofcom. New Clause 15 would require Ofcom to produce guidance on how platforms should go about adjudging illegality in accordance with NC 14.

Assuming that the illegality duty were not dropped altogether, other possibilities might include:

  • Restrict any duty to offences where the existence of an offence (including any potentially available defences) is realistically capable of being adjudged on the face of the content itself with no further information9.
  • For in-scope offences, raise the illegality determination threshold from reasonable grounds to infer to manifest illegality10.

Steps of this kind might in any event be necessary to achieve ECHR compliance. They would also reflect broader traditions of protection of freedom of speech, such as the presumption against prior restraint.

Illegality across the UK11

Another consequence of illegality being a legal construct is that criminal offences vary across the UK. The Bill requires a platform, when preventing or removing user content under its illegality duty, to treat a criminal offence in one part of the UK as if it applied to the whole of the UK. This has the bizarre consequence that platforms will, for instance, have to apply in parallel both the new harmful communications offence contained in the Bill and its repealed predecessor, S.127 of the Communications Act 2003. 

Why is that? Because S.127 will be repealed only for England and Wales and will remain in force in Scotland and Northern Ireland. Platforms would have to treat S.127 as if it still applied in England and Wales; and, conversely, the new England and Wales harmful communications offence as if it applied in Scotland and Northern Ireland. 

In Committee on 14 June 2022 the then Minister confirmed that: 

“…the effect of the clauses is a levelling up—if I may put it that way. Any of the offences listed effectively get applied to the UK internet, so if there is a stronger offence in any one part of the United Kingdom, that will become applicable more generally via the Bill.”

Of course the alternative of requiring platforms to undertake the (in reality impossible) task of deciding which UK law – England and Wales, Northern Ireland or Scotland - applied to which post or tweet, would hardly be less problematic.

At Report stage the government added an amendment to the Bill12 which would, for the future, mean that the non-priority illegality duty would apply only to offences enacted by the UK Parliament, or by devolved administrations with the consent of the Westminster government. If nothing else, that shines a brighter spotlight on the problem.   

The role of Ofcom

The most radical option, were the government looking for a legislative quick win that cuts out delay, would be to jettison Ofcom and its companion two year procession of secondary legislation, guidance and codes of practice. 

In truth, as a matter of principle it was always a bad idea to apply discretionary broadcast-style regulation to individual speech. The way to govern individual speech is with clear, certain laws of general application. A government so minded might consider aligning practicality with principle.

If Ofcom’s role were to be scrapped, what could replace it? One alternative might be to take existing common law duties of care relating to risk of physical injury as the starting point, clarify and codify them on the face of the Bill (not by secondary legislation), and provide for enforcement otherwise than through a regulator. 

That would require the scope and content of any duties under the Bill to be articulated, Goldilocks-style, to a reasonable level of clarity: not so abstract as to be vague, not so technology and business model-specific as to be unworkable, but just right. Granted, that is easier said than done; but still perhaps more achievable than attempting to launch the overladen supertanker that is now marooned in its passage through Parliament. 

This approach would, however, raise questions about how some kinds of duty could be made compatible with the hosting protections originating in the EU eCommerce Directive, to which the government remains committed (unlike the prohibition on general monitoring obligations, from which the government has distanced itself). 

There would then be the question of Ofcom’s proposed powers to issue notices requiring providers to use approved scanning and filtering technology13. Those powers are at best controversial, raising a plethora of issues of their own.  If such powers were to be continued in some form, they could be a candidate for separate legislation, so that issues such as impact on privacy and end-to-end encryption could be brought out into the open and given the full debate that they deserve. 

Other parts of the Bill

There is much more to the Bill than the parts discussed so far: fraudulent advertising14 and age-verification of pornographic websites15 have Parts of the Bill to themselves. There are numerous children-related provisions16, which to an extent overlap with the ICO Code of Practice on age-appropriate design and the currently mothballed Digital Economy Act 2017. Other aspects of the Bill include press exemptions promised at the time of the White Paper17 (which always looked likely to be undeliverable and are still the subject of heavy debate); and the provisions constraining how Category 1 platforms can treat journalism and content of democratic importance18.

These are all crafted on the basis of a regulatory regime operated by Ofcom. It would not be a simple matter to disentangle them from Ofcom, should the government contemplate a non-Ofcom fast track.

Online ASBIs

A refocused Bill could face the objection that it does not address some of the most unpleasant, yet currently legal, user behaviour that can be found online. That does not rule out the possibility of legislation that on its face (not consigned to secondary legislation) draws clear lines that may differ from those that apply today.  But if Parliament is unwilling or unable to draw clear lines to govern behaviour regarded as beyond the pale, what other possibilities exist?

One answer, albeit itself somewhat controversial, is already sitting on the legislative shelf, but (as far as can be seen) appears in the online context to be gathering dust.

The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI, the successor to ASBOs) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes online disturbers of the peace, albeit in very broad terms. It maps readily on to the most egregious abuse, cyberbullying, harassment and the like.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017, August 2019 and June 2022) makes no mention of their use in relation to online behaviour. Yet nothing in the legislation restricts an ASBI to offline activities. Indeed, over 10 years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17-year-old who had been posting material on the social media platform Bebo. The order banned him from publishing material that was threatening or abusive and promoted criminal activity.

ASBIs raise difficult questions of how they should be framed and of proportionality. Some may have concerns about the broad terms in which anti-social behaviour is defined in the legislation. Nevertheless, the courts to which applications are made should, at least in principle, have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

That said, the July 2020 Civil Justice Council Report “Anti-Social Behaviour and the Civil Courts” paints a somewhat dispiriting picture of the use of ASBIs offline. It highlights a practice of applying for orders ex parte – something that would be especially troubling for an ASBI that would affect the defendant’s freedom of expression. Concerns of that kind would have to be carefully addressed if online ASBIs were to be picked up and dusted off.

On the positive side, the usefulness of online ASBIs could be transformed if the government were to explore the possibility of extending beyond the official authorities the ability to apply to court for an online ASBI, for instance to selected voluntary organisations.

Finally, for a longer-term view of access to justice online, Section 9 of my submission to the Online Harms White Paper consultation has some blue-sky thoughts.

Footnotes (references are to the Bill sections as at the Commons Report stage, and to amendments and new clauses (NC) adopted during Report Stage on 22 July 2022 but not yet published as an amended Bill.)

1 Required secondary legislation S.53: priority content harmful to children, primary priority content harmful to children; S.54: priority content harmful to adults; NB S.55: regulation-making; S.56: Ofcom review; S.60: reports to National Crime Agency; S.81/Sch 11: service provider categorisation; S.98: overseas regulators; SS.141 and 142: super-complaints.  

2 Required Ofcom codes of practice  S.37/Sch 4: Terrorism content, CSEA content, other duties (illegal content, children’s online safety, adults’ online safety, user empowerment, content of democratic importance, journalistic content, content reporting, complaints procedures); fraudulent advertising (Cat 1 and 2A providers).  Code of practice measures must be compatible with pursuit of specified online safety objectives (Sch 4 para 3, or as amended by regulations). Draft codes of practice are subject to modification under Secretary of State’s power of direction (S.40(1)).

3 Required Ofcom guidance  S.48 (as amended at Report stage): service provider record-keeping, review and children’s risk assessments, service provider protection of news publisher content; S.58 (Cat 1 services): offer to users of identity verification; S.65 (Cat 1, 2A and 2B services): transparency reports; S.69 (regulated pornographic content providers): compliance with duties; S.85: illegal content risk assessments, children’s risk assessments, Cat 1 service adults’ risk assessments; S.130: enforcement action; S.143: super-complaints; NC15: service provider illegality judgements. Also note S.84: Required Ofcom risk assessment, risks register and risk profiles for illegal content, content harmful to children, content harmful to adults.

4 Legal but harmful to adults (Cat 1 services) S.12: adults risk assessment; S.13: transparency and other duties; S.14: user empowerment; S.17(5): user content reporting; S.18(6): complaints procedures; S.54: meanings of content harmful to adults and priority content harmful to adults; S.55: regulations designating priority content harmful to adults; S.56: Ofcom review; S.64/Sch 8 (Cat 1, 2A and 2B services): transparency reports; S.81/Sch 11 (service provider categorisation); S.84: Ofcom risk assessment, risks register and risk profiles; S.187: definition of ‘harm’ as physical or psychological harm.

5 New communications offences S.151 (harmful communications); S.152 (false communications); S.153 (threatening communications); S.154 (interpretation); S.155 (extraterritorial reach); S.156 (liability of corporate officers). S.151 and 152 make use of problematic ‘likely audience’ tests. S.153 ought to be uncontroversial but has adopted wider language than the Law Commission’s recommendation, resulting in possible overreach (discussed here).

6 Harmful communications offence S.151.

7 Illegality duty S.9 (U2U services), S.24 (search services).

8 Real-time automated filtering systems S.9(3)(a) and (b); cf also S.9(2); S.24(3)(a); cf also S.24(2); S.104: accredited technology (terrorism content, CSEA content); S.117: proactive technology requirement (illegal content, children’s online safety, fraudulent advertising).

9 Capability of adjudging illegality on the face of content alone This would involve review of at least the priority offences designated in Sch 7.

10 Manifest illegality This would involve reconsideration of NC 14. There is uncertainty in the Bill about whether, and if so how far, a provider would be expected to go looking for information in order to determine whether there were reasonable grounds to infer an offence (para 10 of the Government Fact Sheet suggests that this would be left to Ofcom guidance). This seems most likely to be relevant to the reactive duties specified in S.9(3)(c) and S.24(3)(b) rather than to real time automated monitoring and filtering (S.9(3)(a) and (b); cf also S.9(2); S.24(3)(a)).

11 Illegality applied across the UK  S.52(9) and (12).

12 Future devolved offences Amendment 94.

13 Scanning and filtering technology powers  S.104: use of accredited technology (terrorism content, CSEA content); S.117: inclusion of proactive technology requirements in Ofcom confirmation decisions (illegal content, children’s online safety, fraudulent advertising).

14 Fraudulent advertising (Cat 1 and Cat 2A services) SS. 34 and 35.

15 Pornographic site age verification SS.66 to 68.

16 Children-related provisions For service providers the main duties for content harmful to children are set out in S.31: children’s access assessments; SS.10 and 25: children’s risk assessments; SS.11 and 26: children’s safety duties; S.53: primary priority, priority and non-designated content harmful to children; S.187: definition of ‘harm’ as physical or psychological harm.  

17 News publisher content exemptions  S.49(2)(g) and S.51(2)(b): exclusion from scope of safety duties; NC19 (Cat 1 services): duties to protect news publisher content. S.49(2)(e): comments and reviews on provider content. Recognised news publishers are also among those exempted from two of the three new communications offences: S.151(6)(a) (harmful communications); S.152(4)(a) (false communications).

18 Journalism and content of democratic importance (Cat 1 services) S.16: journalistic content; S.15: content of democratic importance.

[Typo (2009) corrected to 2019, 19 Aug 2022. Footnotes added 9 Oct 2022.]



Saturday, 30 July 2022

Platforms adjudging illegality – the Online Safety Bill’s inference engine

The Online Safety Bill, before the pause button was pressed, enjoyed a single day’s Commons Report stage debate on 12 July 2022.  Several government amendments were passed and incorporated into the Bill.

One of the most interesting additions is New Clause 14 (NC14), which stipulates how user-to-user providers and search engines should decide whether user content constitutes a criminal offence. This was previously an under-addressed but nevertheless deep-seated problem for the Bill’s illegality duty. 

One underlying issue is that (especially for real-time proactive filtering) providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making.

Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff.

NC14 looks like a response to concerns raised in April 2022 by the Independent Reviewer of Terrorism Legislation over how a service provider’s illegality duty would apply to terrorism offences, for which (typically) the scope of acts constituting an offence is extremely broad. The most significant limits on the offences are set by intention and available defences – neither of which may be apparent to the service provider. As the Independent Reviewer put it:

“Intention, and the absence of any defence, lie at the heart of terrorism offending”.

He gave five examples of unexceptional online behaviour, ranging from uploading a photo of Buckingham Palace to soliciting funds on the internet, which, if intention or lack of a defence were simply assumed, would be caught by the illegality duty. He noted:

“It cannot be the case that where content is published etc. which might result in a terrorist offence being committed, it should be assumed that the mental element is present, and that no defence is available. Otherwise, much lawful content would “amount to” a terrorist offence.”

If, he suggested, the intention of the Bill was that inferences about mental element and lack of defence should be drawn, then the Bill ought to identify a threshold. But if the intention was to set the bar at ‘realistic to infer’, that:

“does not allow sufficiently for freedom of speech. It may be “realistic” but wholly inaccurate to infer terrorist intent in the following words: “I encourage my people to shoot the invaders””

Issues of this kind are not confined to terrorism offences. There will be other offences for which context is significant, or where much of the work of keeping the offence within proper bounds is done by intention and defences.

Somewhat paraphrased, the answers that NC14 provides to service providers are as follows (a rough sketch of the decision logic appears after the list):

  • Base your illegality judgement on whatever relevant information you or your automated system have reasonably available.
  • If you have reasonable grounds to infer that all elements of the offence (including intention) are present, that is sufficient unless you have reasonable grounds to infer that a defence may be successful.
  • If an item of content surmounts the reasonable grounds threshold, and you do not have reasonable grounds to infer a defence, then you must treat it as illegal.
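By way of illustration only, that decision procedure might be modelled along the following lines. This is a hedged sketch of the logic paraphrased above, not anything found in the Bill or NC14 itself; the class, function and flag names are invented for the example, and the two flags stand in for conclusions already drawn from whatever information was reasonably available.

```python
from dataclasses import dataclass

@dataclass
class IllegalityJudgement:
    """Toy model of the inputs to an NC14-style judgement (illustrative only).

    Both flags are assumed to have been set on the basis of whatever
    information was reasonably available to the provider or its automated
    system - for real-time filtering, often little more than the post itself.
    """
    grounds_to_infer_all_elements: bool   # conduct and mental elements alike
    grounds_to_infer_defence: bool        # a defence may succeed

def treat_as_illegal(j: IllegalityJudgement) -> bool:
    # No reasonable grounds to infer all elements of the offence: not illegal.
    if not j.grounds_to_infer_all_elements:
        return False
    # A possible defence is ignored unless there are reasonable grounds
    # (on the available information) to infer that it may succeed.
    return not j.grounds_to_infer_defence

# Content clearing the 'all elements' threshold, with nothing available to
# suggest a defence, must be treated as illegal.
print(treat_as_illegal(IllegalityJudgement(True, False)))  # True
print(treat_as_illegal(IllegalityJudgement(True, True)))   # False
```

The sketch makes the point in the text visible: everything turns on what the provider can infer from the limited information in front of it.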

Factors relevant to the reasonable availability of information include the size and capacity of the service provider, and whether a judgement is made by human moderators, automated systems or processes, or a combination of both. (The illegality duty applies not just to large social media operators but to all of the estimated 25,000 providers within the scope of the Bill.)

Ofcom will be required to provide guidance to service providers about making illegality judgements.

What does this mean for users? It is users, let us not forget, whose freedom of expression rights are at risk of being interfered with as a result of the illegality removal duty imposed on service providers. The duty can be characterised as a form of prior restraint.

The first significant point concerns unavailability to the provider of otherwise potentially relevant contextual information. If, from information reasonably available to the provider (which at least for automated systems may be only the content of the posts themselves and, perhaps, related posts), it appears that there are reasonable grounds to infer that an offence has been committed, that is enough. At least for automated real-time systems, the possibility that extrinsic information might put the post in a different light appears to be excluded from consideration, unless its existence and content can be inferred from the posts themselves.

Alternatively, for offences that are highly dependent on context (including extrinsic context), would there be a point at which a provider could (or should) conclude that there is too little information available to support a determination of reasonable grounds to infer?

Second, the difference between elements of the offence itself and an available defence may be significant. The possibility of a defence is to be ignored unless the provider has some basis in the information reasonably available to it on which to infer that a defence may be successful.

Take ‘reasonable excuse’. For the new harmful communications offence that the Bill would enact, lack of reasonable excuse is an element of the offence, not a defence. A provider could not conclude that the user’s post was illegal unless it had reasonable grounds to infer (on the basis of the information reasonably available to it) that there was no reasonable excuse.

By contrast, two offences under the Terrorism Act 2000 - S.58 (collection of information likely to be of use to a terrorist) and S.58A (publishing information about members of the armed forces etc. likely to be of use to a terrorist) - provide for a reasonable excuse defence.

The possibility of such a defence is to be ignored unless the provider has reasonable grounds (on the basis of the information reasonably available to it) to infer that a defence may be successful.

The difference is potentially significant when we consider that (for instance) journalism or academic research constitutes a defence of reasonable excuse under S.58. Unless the material reasonably available to the provider (or its automated system) provides a basis on which to infer that journalism or academic research is the purpose of the act, the possibility of a journalism or academic research defence is to be ignored. (If, hypothetically, the offence had been drafted similarly to the harmful communications offence, so that lack of reasonable excuse was an element of the offence, then in order to adjudge the post as illegal the provider would have had to have reasonable grounds to infer that the purpose was not (for instance) journalism or academic research.)
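To make the asymmetry concrete, here is a purely illustrative sketch in the same vein as the one above. The function and parameter names are invented for the example; it models the logic described in the text, not the actual statutory wording of either offence.

```python
def element_style(grounds_other_elements: bool,
                  grounds_no_reasonable_excuse: bool) -> bool:
    """Lack of reasonable excuse as an ELEMENT of the offence (as with the
    harmful communications offence): the provider needs reasonable grounds
    to infer its absence before treating the post as illegal."""
    return grounds_other_elements and grounds_no_reasonable_excuse

def defence_style(grounds_all_elements: bool,
                  grounds_defence_may_succeed: bool) -> bool:
    """Reasonable excuse as a DEFENCE (as with Terrorism Act 2000 S.58/58A):
    it is ignored unless the available information gives reasonable grounds
    to infer that it may succeed."""
    return grounds_all_elements and not grounds_defence_may_succeed

# A researcher's post where nothing on the face of the content signals
# academic research: under the element-style offence the provider cannot
# infer the absence of reasonable excuse, so the post is not treated as
# illegal; under the defence-style offence the defence falls to be ignored.
print(element_style(True, grounds_no_reasonable_excuse=False))  # False
print(defence_style(True, grounds_defence_may_succeed=False))   # True
```

The same post, judged on the same limited information, comes out differently depending solely on which side of the element/defence line the reasonable excuse sits.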

NC14 was debated at Report Stage. The Opposition spokesperson, Alex Davies-Jones, said:

“The new clause is deeply problematic, and is likely to reduce significantly the amount of illegal content and fraudulent advertising that is correctly identified and acted on.”

“First, companies will be expected to determine whether content is illegal or fraudulently based on information that is “reasonably available to a provider”, with reasonableness determined in part by the size and capacity of the provider. That entrenches the problems I have outlined with smaller, high-risk companies being subject to fewer duties despite the acute risks they pose. Having less onerous applications of the illegal safety duties will encourage malign actors to migrate illegal activity on to smaller sites that have less pronounced regulatory expectations placed on them.”

If information ‘reasonably available to a provider” is insufficiently stringent, what information should a provider be required to base its decision upon? Should it guess at information that it does not have, or make assumptions (which would trespass into arbitrariness)?

In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned.

Alex Davies-Jones went on:

“That significantly raises the threshold at which companies are likely to determine that content is illegal.”

We might fairly ask: raises the threshold compared with what? The draft Bill defined illegal user content as “content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence.” That standard (which would inevitably have resulted in over-removal) was dropped from the Bill as introduced into Parliament, leaving it unclear what standard service providers were to apply.

The new Online Safety Minister (Damian Collins) said:

“The concern is that some social media companies, and some users of services, may have sought to interpret the criminal threshold as being based on whether a court of law has found that an offence has been committed, and only then might they act. Actually, we want them to pre-empt that, based on a clear understanding of where the legal threshold is. That is how the regulatory codes work. So it is an attempt, not to weaken the provision but to bring clarity to the companies and the regulator over the application.”

In any event, if the Opposition were under the impression that prior to NC14 the threshold in the Bill was lower than ‘reasonable grounds to infer’, what might that standard be? If service providers were obliged to remove user content on (say) a mere suspicion of possible illegality, would that sufficiently protect legal online speech? Would a standard set that low comply with the UK’s ECHR obligations, to which - whatever this government’s view of the ECHR may be - the Opposition is committed? Indeed, it is sometimes said that the standard set by the ECHR is manifest illegality.

It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.

It is ironic that in the 12 July Commons debate the most perceptive comments about how service providers are meant to comply with the illegality duty were made by Sir Jeremy Wright QC, the former Culture Secretary who launched the White Paper in April 2019. He said:

“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. But, as new clause 14 makes clear, what the platform has to do is not just identify content but have reasonable grounds to infer that all elements of an offence, including the mental elements, are present or satisfied, and, indeed, that the platform does not have reasonable grounds to infer that the defence to the offence may be successfully relied upon.

That is right, of course, because criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case. This is helpful guidance, but the Government will have to think carefully about what further guidance they will need to give—or Ofcom will need to give—in order to help a platform to make those very difficult judgments.”

Why did the government not address this fundamental issue at the start, when a full and proper debate about it could have been had?

This is not the only aspect of the Online Safety Bill that could and should have been fully considered and discussed at the outset. If the Bill ends up being significantly delayed, or even taken back to the drawing board, the government has only itself to blame.