Friday 25 November 2022

How well do you know the Online Safety Bill?

With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate? Read on to find out.

The Bill makes illegal online what is illegal offline.
No. We have to go a long way to find a criminal offence that does not already apply online as well as offline (other than those such as driving a car without a licence, which by their nature can apply only to the physical world). One of the few remaining anomalies is the paper-only requirement for imprints on election literature – a gap that will be plugged when the relevant provisions of the Elections Act 2022 come into force.

Moreover, in its fundamentals the Bill departs from the principle of online-offline equivalence. Its duties of care are extended in ways that have no offline analogue. It creates a broadcast-style Ofcom regulatory regime that has no counterpart for individual speech offline: regulation by discretionary regulator rather than by clear, certain, general laws.

The real theme underlying the Bill is far removed from offline-online equivalence. It is that online speech is different from offline: more reach, more persistent, more dangerous and more in need of a regulator’s controlling hand.

Under the Bill's safety duty, before removing a user's post a platform will have to be satisfied to the criminal standard that it is illegal.
No. The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.

Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon.

The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality. In a recent House of Lords debate Lord Gilbert, who chaired the Lords Committee that produced a Report on Freedom of Expression in the Digital Age, asked whether the government had considered a change in the standard from “reasonable grounds to believe” to “manifestly illegal”.  The government minister replied by referring to the "reasonable grounds to infer" amendment, which he said would protect against both under-removal and over-removal of content.
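To make the over-removal concern concrete, here is a deliberately toy sketch in Python of how an automated moderation pipeline might encode different legal standards as confidence cut-offs. The Bill prescribes no numeric values; the scores, thresholds and function name below are invented purely for illustration.

```python
# Toy model only: the Bill sets no numeric thresholds. The invented figures
# below illustrate why a low confidence bar removes borderline content that
# a court applying the criminal standard would never convict on.

REASONABLE_GROUNDS_TO_INFER = 0.5   # assumed analogue of the Bill's standard
BEYOND_REASONABLE_DOUBT = 0.95      # assumed analogue of the criminal standard

def should_remove(illegality_score: float, threshold: float) -> bool:
    """Remove a post when the classifier's confidence clears the threshold."""
    return illegality_score >= threshold

# A borderline post: some indicators of illegality, but no context from the
# author and no consideration of possible defences.
post_score = 0.6

print(should_remove(post_score, REASONABLE_GROUNDS_TO_INFER))  # True: removed
print(should_remove(post_score, BEYOND_REASONABLE_DOUBT))      # False: kept
```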

The Bill will repeal the S.127 Communications Act 2003 offences.
Half true. Following a recommendation by the England and Wales Law Commission, the Bill will replace both S.127 (of Twitter Joke Trial notoriety) and the Malicious Communications Act 1988 with new offences, notably sending a harmful communication.

However, the repeal of S.127 is only for England and Wales. S.127 will continue in force in Scotland. As a result, for the purposes of a platform’s illegality safety duty the Bill will deem the remaining Scottish S.127 offence to apply throughout the UK. So in deciding whether it has reasonable grounds to infer illegality a platform would have to apply both the existing S.127 and its replacement. [Update: the government announced on 28 November 2022 that the 'grossly offensive' offences under S.127(1) and the MCA 1988 will no longer be repealed, following its decision to drop the new harmful communications offence.] 

A platform may be required to adjudge whether a post causes spiritual injury.
True. The National Security Bill will create a new offence of foreign interference. One route to committing the offence requires establishing that the conduct involves coercion. An example of coercion is given as “causing spiritual injury to, or placing undue spiritual pressure on, a person”.

The new offence would be designated as a priority offence under the Online Safety Bill, meaning that platforms would have to take proactive steps to prevent users encountering such content.

A platform may be required to adjudge whether a post represents a contribution to a matter of public interest.
True. The new harmful communications offence (originating from a recommendation by the Law Commission) provides that the prosecution must prove, among other things, that the sender has no reasonable excuse for sending the message. Although not determinative, one of the factors that the court must consider (if it is relevant in a particular case) is whether the message is, or is intended to be, a contribution to a matter of public interest.

A platform faced with a complaint that a post is illegal by virtue of this offence would be put in the position of making a judgment on public interest, applying the standard of whether it has reasonable grounds to infer illegality. During the Commons Committee stage the then Digital Minister Chris Philp elaborated on the task that a platform would have to undertake. It would, he said, perform a "balancing exercise" in assessing whether the content was a contribution to a matter of public interest. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age contains the following illuminating exchange: 'We asked the Law Commission how platforms’ algorithms and content moderators could be expected to identify posts which would be illegal under its proposals. Professor Lewis told us: “We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it.”' However, it is the freedom of speech of users, not businesses, that is violated by the arbitrariness inherent in requiring platforms to adjudge vague laws.

Platforms would be required to filter users’ posts.
Highly likely, at least for some platforms. All platforms would be under a duty to take proportionate proactive steps to prevent users encountering priority illegal content, and (for services likely to be accessed by children) to prevent children from encountering priority content harmful to children. The Bill gives various examples of such steps, ranging from user support to content moderation, but the biggest clues are in the Code of Practice provisions and the enforcement powers granted to Ofcom.

Ofcom is empowered to recommend in a Code of Practice (if proportionate for a platform of a particular kind or size) proactive technology measures such as algorithms, keyword matching, image matching, image classification or behaviour pattern detection in order to detect publicly communicated content that is either illegal or harmful to children. Its enforcement powers similarly include use of proactive technology. Ofcom would have additional powers to require accredited proactive technology to be used in relation to terrorism content and CSEA (including, for CSEA, in relation to private messages).
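As a purely illustrative sketch of what such proactive technology involves, the toy Python below pairs naive keyword matching with exact hash matching against a known-content database. The watch-list, hash values and function names are assumptions invented for illustration; deployed systems rely on perceptual hashing and machine-learning classifiers rather than the exact matching shown here.

```python
import hashlib

# Assumed watch-list and hash database, invented for this sketch.
PRIORITY_KEYWORDS = {"example banned phrase"}
KNOWN_IMAGE_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"
}

def keyword_match(text: str) -> bool:
    """Flag text containing any watch-listed phrase (naive substring test)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in PRIORITY_KEYWORDS)

def image_match(image_bytes: bytes) -> bool:
    """Flag an image whose SHA-256 digest appears in the database.

    Exact hashing breaks on any re-encoding or resizing, which is why
    deployed systems use perceptual hashes that tolerate such changes.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES
```

Even this toy version shows the salient feature of proactive technology: every post is checked before or shortly after upload, at scale, rather than in response to a complaint.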

The Bill regulates platforms, not users.
False dichotomy. The Bill certainly regulates platforms, but does so by pressing them into service as proxies to control content posted by users. The Bill thus regulates users at one remove. It also contains new criminal offences that would be committed directly by users.

The Bill outlaws hurting people's feelings.
No, but the new harmful communications offence comes close. It would criminalise sending, with no reasonable excuse, a message carrying a real and substantial risk that it would cause psychological harm - amounting to at least serious distress - to a likely member of the audience, with the intention of causing such harm. There is no requirement that the response of a hypothetical seriously distressed audience member should be reasonable. One foreseeable hypersensitive outlier is enough. Nor is there any requirement to show that anyone was actually seriously distressed.

The Law Commission, which recommended this offence, considered that it would be kept within bounds by the need to prove intent to cause harm and the need to prove lack of reasonable excuse, both to the criminal standard. However, the standard to which platforms will operate in assessing illegality is reasonable grounds to infer. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill also refers to psychological harm in other contexts, but without defining it further. The government intends that psychological harm should not be limited to a medically recognised condition.

The Bill recriminalises blasphemy.
Quite possibly. Blasphemy was abolished as a criminal offence in England and Wales in 2008 and in Scotland in 2021. The possible impact of the harmful communications offence (see previous item) has to be assessed against the background that people undoubtedly exist who experience serious distress (or at least claim to do so) upon encountering content that they regard as insulting to their religion. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill is all about Big Tech and large social media companies.
No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.

The Bill is all about algorithms that push and amplify user content.
No. The Bill makes occasional mention of algorithms, but the core duties would apply regardless of whether a platform makes use of algorithmic curation. A plain vanilla discussion forum is within scope.

The Secretary of State can instruct Ofcom to modify its Codes of Practice.
True. Section 40 of the Bill empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the Secretary of State believes that modifications are required (a) for reasons of public policy, or (b) in the case of a terrorism or CSEA code of practice, for reasons of national security or public safety. The Secretary of State can keep sending the modified draft back for further modification.

A platform will be required to remove content that is legal but harmful to adults.
No. The legal but harmful to adults duty (should it survive in the Bill) applies only to Category 1 platforms and on its face only requires transparency. Some have argued that its effect will nevertheless be heavily to incentivise Category 1 platforms to remove such content. [Update: the government announced on 28 November 2022 that the legal but harmful to adults duty will be dropped.]

The Bill is about systems and processes, not content moderation.
False dichotomy. Whilst the Bill's illegality and harm to children duties are couched in terms of systems and processes, it also lists measures that a service provider is required to take or use to fulfil those duties, if it is proportionate to do so. Content moderation, including taking down content, is in the list. It is no coincidence that the government’s Impact Assessment estimates additional moderation costs over a 10 year period at nearly £2 billion.

Ofcom could ban social media quoting features.
Indeterminate. Some may take the view that enabling social media quoting encourages toxic behaviour (the reason why the founder of Mastodon did not include a quote feature). A proponent of requiring more friction might argue that it is the kind of non-content oriented feature that should fall within the ‘safety by design’ aspects of a duty of care - an approach that some regard as preferable to moderating specific content.

Ofcom deprecation of a design feature would have to be tied to some aspect of a safety duty under the Bill and perhaps to risk of physical or psychological harm. There would likely have to be evidence (not just an opinion) that the design feature in question contributes to a relevant kind of risk within the scope of the Bill. From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.  

Of course the Bill does not tell us which design features should or should not be permitted. That is in the territory of the significant discretion (and consequent power) that the Bill places in the hands of Ofcom. If it were considered to be within scope of the Bill and proportionate to deprecate a particular design feature, in principle Ofcom could make a recommendation in a Code of Practice. That would leave it to the platform either to comply or to explain how it satisfied the relevant duty in some other way. Ultimately Ofcom could seek to invoke its enforcement powers.

The Bill will outlaw end to end encryption.
Not as such, but... Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.
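By way of a hedged sketch of the client-side scanning idea mentioned above: matching happens on the user's device, against a hash database, before encryption, so the service itself never handles plaintext. Everything here, from the function names to the hash set, is an assumption invented for illustration; real proposals use perceptual hashing and more elaborate private-matching protocols.

```python
import hashlib

# Assumed hash database of known prohibited material, invented for this sketch.
BLOCKED_HASHES = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
}

def send_message(plaintext: bytes, encrypt, transmit) -> bool:
    """Scan on the device, then end-to-end encrypt and send only if clean.

    `encrypt` and `transmit` stand in for the messenger's E2E layer; the
    crucial point is that the check runs on plaintext before encryption.
    """
    if hashlib.sha256(plaintext).hexdigest() in BLOCKED_HASHES:
        return False  # matched: message withheld (or reported, per policy)
    transmit(encrypt(plaintext))
    return True
```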

The government’s response to the Pre-legislative Scrutiny Committee is also illuminating: “End-to-end encryption should not be rolled out without appropriate safety mitigations, for example, the ability to continue to detect known CSEA imagery.” 

The press are exempt.
True up to a point, but it’s complicated.

First, user comments under newspaper and broadcast stories are intended to be exempt as ‘limited functionality’ under Schedule 1 (but the permitted functionality is extremely limited, for instance apparently excluding comments on comments).

Second, platforms' safety duties do not apply to recognised news publisher content appearing on their services. However, many news and other publishers will fall outside the exemption. 

Third, various press and broadcast organisations are exempted from the new harmful and false communications offences created by the Bill. 
[Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

[Updated 3 December 2022 to take account of the government announcement on 28 November 2022.]

Wednesday 2 November 2022

On the Dotted Line

The topic of electronic signatures seems cursed to eternal life. In the blue corner we have the established liberal English law approach to signatures, which eschews formality and emphasises intention to authenticate. In the red corner we have a preoccupation with verifying the identity of the signatory, with technically engineered digital signatures and with the EU’s eIDAS hierarchy of qualified, advanced and ordinary electronic signatures.

In the English courts the blues have it. Judges have upheld the validity of electronic signatures as informal as signing a name at the end of an e-mail or even, in one case, clicking an ‘I accept’ button on an electronic form. They have been able to do this partly because, with very few exceptions, the England and Wales legislature has refrained from stipulating use of an eIDAS-compliant qualified or advanced signature as a condition of validity. The eIDAS hierarchy does form part of our law, but – rather like the Interpretation Act – in the guise of a toolkit that is available to be used or not as the legislature wishes. The toolkit has for the most part remained on the legislative shelf.
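For readers unfamiliar with what a 'technically engineered' digital signature involves, here is a minimal sketch using Python's cryptography library. It shows the cryptographic binding that underlies an eIDAS advanced signature: the document is signed with a key under the signatory's sole control, and any alteration breaks verification. (A qualified signature adds a certificate from a trust service provider and a qualified signature-creation device; none of that is modelled here.)

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The signatory's private key: in eIDAS terms, data under their sole control.
private_key = ec.generate_private_key(ec.SECP256R1())
document = b"I agree to the terms of this contract."

# Sign the document; the signature is bound to this exact byte sequence.
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# Verification succeeds only for the unaltered document and matching key;
# verify() raises InvalidSignature if a single byte has changed.
private_key.public_key().verify(signature, document, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```

Contrast that machinery with the typed name 'Alex' discussed below, which the courts nonetheless treat as a valid signature because English law asks about authenticating intention, not cryptographic strength.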

The potential consequences of stipulating eIDAS-style formalities in legislation are graphically illustrated by the Austrian case of the Wrong Kind of Signature. A €3bn contract to supply double-decker trains to Austrian Federal Railways was invalidated because the contract was signed with a qualified electronic signature supported by a Swiss, rather than an EU, Trusted Service Provider.

The modern English law aversion to imposition of formalities was pithily encapsulated in an official committee report of 1937, describing the Statute of Frauds:

“'The Act', in the words of Lord Campbell . . . 'promotes more frauds than it prevents'. True it shuts out perjury; but it also and more frequently shuts out the truth. It strikes impartially at the perjurer and at the honest man who has omitted a precaution, sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to assert that 'in the vast majority of cases its operation is simply to enable a man to break a promise with impunity, because he did not write it down with sufficient formality.'”

For its part eIDAS continues to complicate and confound. February’s Interim Report of the Industry Working Group on the Electronic Execution of Documents, running to 94 pages of discussion, stated that ‘only’ qualified electronic signatures have equivalent legal status to handwritten signatures (meaning, according to the Report, that they carry a presumption of authenticity). Yet while eIDAS does require equivalent legal effect (whatever that may mean) to be accorded to qualified signatures, it does not require other kinds of electronic signature to be denied that status; nor has English domestic law done so.

Back in the courts, a recent decision of Senior Costs Judge Gordon-Saker in Elias v Wallace LLP [2022] EWHC 2574 (SCCO) continues down the road of upholding the validity of informal electronic signatures. Under the Solicitors Act 1974 (as amended) a solicitor’s bill cannot be enforced by legal proceedings unless it complies with certain formalities, including that it has to be:

“(a) signed by the solicitor or on his behalf by an employee of the solicitor authorised by him to sign, or

(b) enclosed in, or accompanied by, a letter which is signed as mentioned in paragraph (a) and refers to the bill.”

The Act states that the signature may be an electronic signature. It takes its definition of electronic signature from S.7(2) of the Electronic Communications Act 2000[1], as amended:

“… so much of anything in electronic form as –

(a)   is incorporated into or otherwise logically associated with any electronic communication or electronic data; and

(b)   purports to be used by the individual creating it to sign.”

This is an unusual example of English legislation stipulating compliance with a defined kind of signature (albeit that S.7(2) is framed in very broad terms) as a condition of validity. Most legislation requiring a signature goes no further than a generally stated requirement that the document must be signed[2].

The bills in question were sent to the solicitor’s client as e-mail attachments. The bills themselves were not signed, but the covering e-mails concluded with the words:

“Best regards,

Alex

[first name and surname]

Partner

[telephone numbers, firm name and physical and website addresses]”.

The judge held:

  1. The printed name of the firm incorporated in the invoice, like a letterheading, was not a signature. This unsurprising conclusion is reminiscent of Mehta v J Pereira Fernandes SA [2006] EWHC 813 in which the same was held for an e-mail address appearing at the top of an e-mail.
  2. If the name ‘Alex’ was not generated automatically, clearly it purported to be used as a signature.
  3. If the name ‘Alex’ was auto-generated, then on the authority of Neocleous v Rees that would constitute a signature. The e-mail footer was clearly applied with authenticating intent, even if it was the product of a rule.

The judge also held that ‘letter’ should be interpreted to include e-mail. That is a salutary reminder that the ability to conduct a transaction electronically may not be only a question of whether electronic signatures are permissible. Other requirements of form and process can also come into play.

[1] Note that the role of S.7 was to make explicit (almost certainly unnecessarily) that electronic signatures as defined by the section were admissible as evidence, whereas the Solicitors Act provision concerns substantive validity.

[2] As to which, see the England and Wales Law Commission’s Statement of the Law in its Report on Electronic Execution of Documents (2019).