Tuesday, 15 October 2019

Whose e-signature is it anyway?

Last month the Law Commission issued its Report on Electronic Execution of Documents. The report reinforced the Commission's view, first expressed in 2001, that electronic signatures are generally capable of satisfying a statutory requirement for a signature. That applies to informal signatures such as typing a name in an e-mail, as well as to sophisticated digital signatures.

The report includes a high-level statement of the law, designed to reassure those who still harbour residual doubts about the validity of electronic signatures. Inevitably, however, the statement is subject to some important qualifications. These are not unique to electronic signatures. They have always applied to traditional wet ink signatures. Thus:

1. The person signing the document must intend to authenticate it. Not every insertion of a name, or ticking a box or clicking on a button, is intended to be a signature.
2. Any applicable execution formalities must be satisfied. These can be of various types, from witnessing to a stipulated form of signature.
So if a statute specifies that a document can be signed electronically only by typing a name in a box, then ticking an online box or clicking on a button will not be a valid signature. If the statute specifies that the signature must be in a particular place in a form, then typing a signature somewhere else in the form will not suffice.
Other restrictions may also be important. Thus if a statute stipulates that the document must be signed by a person, someone else cannot sign on that person's behalf.
As often as not, disputes about whether a document has been validly signed with an electronic signature are not about electronic form versus wet ink as such, but about compliance with surrounding formalities and stipulations.

My 2017 article 'Can I Use an Electronic Signature?' identified five steps in analysing whether an electronic signature can be used:


1. Is there an applicable formality requirement of signature, medium, form or process?
2. In principle can electronic form and signature comply with any applicable formality requirements?
3. In the light of 1 and 2, what particular kinds of signature, medium, form or process can in principle be used?
4. For a given solution and process, how strong would be the evidence of compliance with any applicable formality requirement? Is the risk acceptable?
5. For a given solution and process, how strong would be the evidence of signature (i.e. who signed what, how, when and – if relevant – where)? Is the risk acceptable?
So although the Law Commission has provided helpful reassurance about the use of electronic signatures, electronic signatures require no less thought about how they are used — in particular how to comply with any applicable statutory formalities — than their counterparts in the wet ink world.

A salutary tale
The August 2018 Birmingham County Court decision in Kassam v Gill is a salutary tale, illustrating the care that still has to be taken when complying with signature formalities in the electronic environment.

The case contains lessons not only for users, but also for those designing an online process intended to enable users to comply with stipulated formalities.

Kassam v Gill involved Possession Claims OnLine, the facility provided by the Court Service that enables claimants to draft and issue property repossession proceedings online.

As with any English legal proceedings, the claim form and particulars of claim must be verified by a statement of truth: a statement declaring that the claimant believes that the facts stated are true.

The court rules (CPR 22.1(6)) stipulate, as a general rule, that the statement of truth must be "signed by the party, their litigation friend or legal representative". 

CPR Practice Direction PD55B provides a PCOL-specific rule, presumably intended to dispel any possible uncertainty about how, in a dedicated online system, a statement of truth would be signed.

PD55B states:
"any CPR provision which requires a document to be signed by a person is satisfied by that person entering his name on an online form".
The general signature requirement of CPR 22.1(6) is therefore deemed to be satisfied if the PCOL-specific formality in PD55B is complied with.

Significantly, both CPR 22.1(6) and PD55B stipulate that the stated actions must be “by” — not “by or on behalf of” — the party.

The PCOL system

The most obvious way for PCOL to implement PD55B might have been for the system to provide a box next to the statement of truth into which claimants (in this case a Mr and Mrs Gill) would enter their names by typing them.

However the system was not designed in that way. The only point at which a claimant could type their name into a form would be when initially registering on the system.

When a claimant subsequently went through the online steps to draft, generate and issue a claim form and statement of case, towards the end of the process they would encounter a screen summarising the claim form information and setting out the statement of truth. The claimant would have to tick a box on that screen next to the statement of truth. The system would populate the 'Claimant details' box in that screen with the pre-entered registration information. However the claimant's name would not be shown in or next to the statement of truth box itself, either before or after ticking the box.
The arrangement of that screen can be seen below [1]. The first screenshot is before ticking the box, the second is after. The only difference between the two is the addition of the tick in the tick box.

[Screenshots: the PCOL claim summary screen showing the Statement of Truth and tick box, before and after ticking]
Later on in the process, once the claimant had paid the fee and finally requested issue of the proceedings, the system would generate the Claim Form and Particulars of Claim as a PDF document.

It appears that at that point the system would use the pre-entered registration details automatically to populate the 'Signed' and 'Full name' boxes under the Statement of Truth in the final PDF. The judgment in Kassam v Gill sets out the wording of the PDF generated in that case:

"Statement of Truth
I believe that the facts stated in this form are true
Signed MrMrs Harjit / Jagbir Gill date 09 May 2018
Claimant
Full name Mr/Mrs Harjit / Jagbir Gill"
Perhaps unsurprisingly, the judge commented about the automatic name insertion system: "No doubt that makes the form less time consuming to complete, but it is not a happy fit with the requirements of the rules or the [Practice Direction]."

Was PD55B satisfied in this case?

In the case before the court the names 'Mr/Mrs Harjit / Jagbir Gill' were entered into the system upon registration not by Mr or Mrs Gill but, in advance of a meeting with them, by an agent whom they had retained to assist them with recovering possession of their property. The agent was neither a legal representative nor a litigation friend (either of whom under CPR 22.1(6) could have signed on behalf of the claimants).

At the meeting the agent took Mr Gill through the forms. Mr Gill, with the approval of Mrs Gill, ticked the online Statement of Truth box on behalf of both of them. 

In these circumstances, was the Statement of Truth "signed by the party, their litigation friend or legal representative" as required by the CPR?

The judgment proceeded on the basis that what mattered was whether the PCOL-specific deemed signature provision in PD55B was satisfied. In other words, had the relevant person entered their name on an online form?

The design of the system meant that entry of the names on an online form could take place only at the initial registration stage. But that had been done by the agent, not Mr or Mrs Gill.

When, later in the process, an online form displayed the Statement of Truth, Mr Gill ticked the box next to the Statement of Truth. He did that with his wife’s agreement, but did not (and could not) enter either his own or his wife's name.

Nor did their names appear on the online entry form next to the Statement of Truth (see screenshots above). Those fields were populated only in the PDF, generated at the final stage once the request to issue proceedings had been made.

In those circumstances the judge held that the signature rules were not satisfied. The wrong person had entered the names at initial registration:

"This “signature” was applied by [the agent] when he entered the Claimants names on the online form. Mr Gill accepted that he did not enter his name on the form".
Since CPR 22.1(6) required the statement of truth to be signed ‘by the party’ and PD55B required the name to be entered on the online form ‘by that person’, that action could not be taken by some other person on the party’s behalf (unless by a litigation friend or legal representative).
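For readers who think in terms of system design, the sequence just described can be reduced to a short sketch. The following is purely illustrative Python — not the actual PCOL code; all class and function names are invented — modelling why the system's design collided with PD55B: the only typed entry of a name happens at registration, by whoever performs that step, while the later tick and PDF generation involve no name entry at all.

```python
# Purely illustrative model of the PCOL flow as described in the judgment.
# Not the actual PCOL code; all names and structures are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Registration:
    claimant_names: str   # the only names ever typed into the system...
    entered_by: str       # ...entered by whoever performs registration

@dataclass
class Claim:
    registration: Registration
    statement_of_truth_ticked: bool = False
    ticked_by: Optional[str] = None

def tick_statement_of_truth(claim: Claim, actor: str) -> None:
    # Ticking the box involves no entry of any name.
    claim.statement_of_truth_ticked = True
    claim.ticked_by = actor

def generate_pdf(claim: Claim) -> str:
    # On issue, 'Signed' and 'Full name' are auto-populated from the
    # registration details, whoever originally entered them.
    assert claim.statement_of_truth_ticked
    names = claim.registration.claimant_names
    return ("Statement of Truth\n"
            "I believe that the facts stated in this form are true\n"
            f"Signed {names}\n"
            f"Full name {names}\n")

# The facts of Kassam v Gill, expressed in this model:
reg = Registration(claimant_names="Mr/Mrs Harjit / Jagbir Gill",
                   entered_by="agent")            # not a claimant
claim = Claim(registration=reg)
tick_statement_of_truth(claim, actor="Mr Gill")   # no name entered here
print(generate_pdf(claim))
# PD55B asks who entered the name on an online form: here, the agent.
# That is why the deemed signature requirement was not satisfied.
```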


A requirement that a signature be applied ‘by’, rather than ‘by or on behalf of’, someone is a classic potential pitfall. In 1930, in Re Prince Blücher, it invalidated delegation of signing to a solicitor. Today it can invalidate delegation of an electronic signature[2a].

In Kassam v Gill the judge went on to find that the failure to comply with the signature requirement did not justify striking out the claim on that ground. Nevertheless, the case well illustrates the important point that even where an electronic signature is capable in principle of satisfying a statutory requirement for a signature, attention must be paid to other statutory requirements that may affect the validity of the signature.
  
Neocleous v Rees
There continues to be judicial support for the Law Commission's positive view of the general validity of electronic signatures. An example is the recent case of Neocleous v Rees, in which a solicitor sent an e-mail offering terms for the sale of land by his client. He finished ‘Many thanks’, then when he pressed ‘send’ the firm’s e-mail system automatically appended his name and contact details to the foot of the e-mail. The other side accepted the offer.

Since this was a sale of land, the court had to consider whether the offer was “signed by or on behalf” of the solicitor’s client, as required by the Law of Property (Miscellaneous Provisions) Act 1989.

The judge held that the statute was satisfied. He adopted the approach of the Law Commission that what mattered was whether the name was applied "with authenticating intent". The name and contact details had consciously been entered into the firm’s e-mail settings by a person at some stage. The solicitor knew that the system would automatically append his name and contact details. Indeed the judge found that writing ‘Many thanks’ suggested that he was relying on that happening.

In those circumstances it was difficult to distinguish that process from manually adding the name each time an e-mail was sent. The recipient would have no way of knowing whether the details had been added manually or automatically. Objectively, the presence of the name indicated a clear intention to associate oneself with the e-mail – to authenticate or sign it.
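The mechanism the court was considering can be captured in a minimal sketch (illustrative only; the footer text and function names are invented, and this is not the firm's actual e-mail system): the footer is configured once, consciously, and thereafter appended automatically on every send.

```python
# Toy model of an automatic e-mail signature footer (illustrative only).

# Configured once, deliberately, in the e-mail settings (text invented):
SIGNATURE_FOOTER = "A. Solicitor\nExample & Co Solicitors\n01234 567890"

def send_email(body: str, footer: str = SIGNATURE_FOOTER) -> str:
    # The system appends the configured footer on every send; the sender
    # knows this will happen and writes the body accordingly.
    return body + "\n\n" + footer

offer = send_email("...terms for the sale of the land...\n\nMany thanks")
print(offer)
# Signing off 'Many thanks' presupposes that a name will follow --
# the basis on which the court found authenticating intent.
```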

While English law is generally accommodating towards the use of electronic signatures (even relatively informal signatures), any specific applicable formalities (such as the deemed signature provision in Kassam v Gill) cannot be ignored.

Often, the most relevant constraints — whether as to medium, form or process — do not relate to the form of signature itself, but to surrounding formalities. For instance, the Law Commission’s view in its Report is that a requirement that a document be signed in the presence of a witness cannot be satisfied by someone observing remotely by video link.

To return to the illustration provided by Kassam v Gill, it may be critical that the right person takes the right step, whether in a dedicated platform such as PCOL or on a general purpose signing platform. A busy executive who delegates use of a signing device or procedure to an assistant could fall foul of a legal requirement that the executive’s signature be applied personally – not because the signature is in electronic form, but because the wrong person applied it.

Epilogue: tweaking Kassam v Gill

It is instructive to contemplate two variations on Kassam v Gill: first, if the facts had been a little different; and second if, instead of the deemed "entry of names on an online form" provision in PD55B, the court rules had consisted only of the general signature requirement in CPR 22.1(6): that the Statement of Truth be signed by a party.

(1)  Different facts

What if Mr or Mrs Gill, rather than the agent, had entered the claimant names at registration time? Even in this scenario only one or other of Mr or Mrs Gill, not both claimants, would have entered the names at registration.

If, as the judge found, entering the names at registration was the deemed signature, how could it be said that both claimants had done what was required by PD55B?

(2) No PD55B
What if PD55B did not exist, and only the general signature requirement of CPR 22.1(6) had applied: i.e. that the Statement of Truth must be signed by a party? How might CPR 22.1(6) have mapped on to the PCOL process, assuming for simplicity a single claimant rather than the two claimants in Kassam v Gill? [2]
Mapping the rule on to the process

A mapping analysis can be broken down as follows:
(1) What constitutes the document to be signed?
(2) What step(s) constitute applying the signature?
(3) Who has taken the step(s) that constitute applying the signature?
(4) If the statutory or other requirement does not permit delegation of authority to sign, has the right person applied the signature?
What is the document to be signed?
An analysis of a statutory or other applicable general signature requirement has to start by asking what constitutes the document to be signed. There must be a document. For legal purposes [3] the notion of ‘signed’ without a document is as challenging a concept as the smile without the Cheshire Cat.
However, “document” does not necessarily imply paper. Generally speaking a document can be any kind of recorded information, including (say) entries in a database. Electronic data constituting a document may be signed by means of other data logically associated with it.
In the case of PCOL, the process includes the Statement of Truth in the Claim Form and Particulars of Claim themselves.
It therefore seems likely that for PCOL the document to be signed would be the Statement of Truth contained in the Claim Form and Particulars of Claim. Those documents, it should be re-iterated, are generated as PDFs at the end of the PCOL process when the instruction to issue proceedings is given by pressing the appropriate online button after payment has been made.
Document v entry form
We can already see a significant difference between the general signature requirement of CPR 22.1(6) and the deemed signature provision of PD55B. The general signature requirement focuses on signing the relevant document, whereas PD55B focuses on entry of names on an online form.
For the purpose of CPR 22.1(6), considered without PD55B, it is difficult to see how any of the data entry forms in the PCOL process could realistically be regarded as constituting the document to be signed. Their purpose is to gather the data required to generate the relevant documents at the end of the process: the Claim Form and Particulars of Claim, including the signed Statement of Truth for each document.
What step(s) would constitute applying the signature?
If that analysis is right, which of the various steps involved in the PCOL process could be regarded as signing the Statement of Truth under the general provisions of CPR 22.1(6)?

The candidates are:

1. Entry of names at initial registration.
2. Ticking the Statement of Truth verification box.
3. Pressing the final button requesting issue of proceedings.
4. Some combination or cumulative result of the above.
Once divorced from the deemed signature provision of PD55B, with its specific focus on entering names into an online form, the analysis of such a process in terms of the general law of signatures could be more flexible.
It could be less significant that ticking the Statement of Truth verification box did not itself insert a name into the online form, since it might be possible to regard the overall process as amounting to a signature: the automated insertion of the pre-entered name next to the Statement of Truth in the finally generated PDF documents, verified by the ticking of the box at an earlier stage in the process.
More radically, it might be possible to regard the ticking of the box alone — an act not dependent on names having been inserted in any online data entry form at any stage — as signing the Statements of Truth produced on the ultimately generated PDF forms. Clicking the tick box could be regarded as itself communicating an intention to authenticate the ultimately generated documents (by analogy with Bassano v Toft at paragraphs [43] to [45]). Such an analysis might avoid signature issues arising if the names had been entered on registration by someone other than the claimant.
Who has taken the step(s)?
Even then, in the case of two claimants could one claimant's click on a single tick box amount to signature by both claimants? Would both claimants have to join hands together to wield the mouse and tick the box in unison?
Has the right person taken the step(s)?

Furthermore, what if (for instance) the final step in the process – the click on the button to instruct the system to issue the proceedings – was executed by an agent (other than a legal representative or litigation friend), not by the claimant themself? If that is the step in the process that actually generates the document required to be signed, could it be said that the document was signed by the claimant? Would all the steps in the process have had to be taken by the claimant personally?

These would not necessarily be easy questions. They, and the example of PD55B, all illustrate the importance, where a dedicated online process is concerned, of matching the process to applicable statutory or other requirements.
Footnotes:
[1] The screenshots are from a dummy session that I conducted in October 2019 to observe the PCOL system. The session was terminated before payment of a fee. The arrangement appears to be the same as that described in the judgment.
[2] This is a hypothetical discussion for the purposes of illustration. If the deemed signature provisions of PD55B constitute a complete code for signature under the PCOL process, the general signature provisions of CPR 22.1(6) would take second place to PD55B.
[2a] In 2006 the Privy Council in Whitter v Frankson said that it would be the exception rather than the rule to construe 'by' as precluding the possibility of delegation: "The general principle is that when a statute gives someone the right to invoke some legal procedure by giving a notice or taking some other formal step, he may either do so in person or authorise someone else to do it on his behalf. Qui facit per alium facit per se." Indeed it held that Prince Blücher was wrongly decided. Nevertheless, there can be exceptional cases and it is ultimately a matter of construction of each statute or other rule whether that is permissible.
[3] It makes sense to talk of ‘my signature’ as something that can potentially be applied to a document in order to sign it. We can speak of providing a sample of ‘my signature’. However a signature in that sense has no legal consequence unless or until it is used to authenticate a document. At that point we can speak in legal terms of the document having been signed and of it bearing a signature.

[Updated 18 October 2019 to add footnote 2a and 'potential' pitfall. With thanks to Nicholas Bohm for drawing my attention to Whitter v Frankson.]

Friday, 28 June 2019

Speech is not a tripping hazard - response to the Online Harms White Paper


My submission to the Online Harms White Paper Consultation.




Errata as noted in document corrected 29 June 2019.
Reference to Al-Najar and Others v Cumberland Hotel (London) Ltd [2019] EWHC 1593 (QB) added 1 July 2019.


Sunday, 5 May 2019

The Rule of Law and the Online Harms White Paper

Before the publication of the Online Harms White Paper on 8 April 2019 I proposed a Ten Point Rule of Law test to which it might usefully be subjected.

The idea of the test is less to evaluate the substantive merits of the government’s proposal – you can find an analysis of those here – and more to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.

Here is an analysis of the White Paper from that perspective. The questions posed are whether the White Paper demonstrates sufficient certainty and precision in respect of each of the following matters.
1. Which operators are and are not subject to the duty of care
The White Paper says that the regulatory framework should apply to “companies that allow users to share or discover user-generated content, or interact with each other online.”
This is undoubtedly broad, but on the face of it is reasonably clear.  The White Paper goes on to provide examples of the main types of relevant service:
- Hosting, sharing and discovery of user-generated content (e.g. a post on a public forum or the sharing of a video).
- Facilitation of public and private online interaction between service users (e.g. instant messaging or comments on posts).
However these examples introduce a significant element of uncertainty. Thus, how broad is ‘facilitation’? The White Paper gives a clue when it mentions ancillary services such as caching. Yet it is difficult to understand the opening definition as including caching.
The White Paper says that the scope will include “social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.”  In the Executive Summary it adds messaging services and search engines into the mix. Although the White Paper does not mention them, online games would clearly be in scope as would an app with social or discussion features.
Applicability to the press is an area of significant uncertainty. Comments sections on newspaper websites, or a separate discussion forum run by a newspaper such as in the Karim v Newsquest case, would on the face of it be in scope. However, in a letter to the Society of Editors the Secretary of State has said:
“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”
This exclusion is nowhere stated in the White Paper. Further, it does not address the fact that newspapers are themselves users of social media. They have Facebook pages and Twitter accounts, with links to their own websites. As such, their own content is liable to be affected by a social media platform taking action to suppress user content in performance of its duty of care.
The verdict on this section might have been ‘extremely broad but clearly so’. However the uncertainty introduced by ‘facilitation’, and by the lack of clarity about newspapers, results in a FAIL.
2. To whom the duty of care is owed
The answer to this appears to be ‘no-one’. That may seem odd, especially when Secretary of State Jeremy Wright referred in a recent letter to the Society of Editors to “a duty of care between companies and their users”, but what is described in the White Paper is not in fact a duty of care at all.
The proposed duty would not provide users with a basis on which to make a damages claim against the companies for breach, as is the case with a common law duty of care or a statutory duty of care under, say, the Occupiers’ Liability Act 1957.
Nor, sensibly, could the proposed duty do so, since its conception of harm strays beyond established duty of care territory of risk of physical injury to individuals, into the highly contestable region of speech harms and then on into the unmappable wilderness of harm to society.
Thus in its introduction to the harms in scope the White Paper starts by referring to online content or activity that ‘harms individual users’, but then goes on: “or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
In the context of disinformation it refers to “undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
Whatever (if anything) these abstractions may mean, they are not the kind of thing that can properly be made the subject of a legal duty of care in the offline world sense of the phrase.
The proposed duty of care is something quite different: a statutory framework giving a regulator discretion to decide what should count as harmful, what kinds of behaviour by users should be regarded as causing harm, what rules should be put in place to counter it, and which operators to prioritise.
From a rule of law perspective the answer to the question posed is that it does seem clear that the duty would be owed to no one. In that limited sense it probably rates a PASS, but only by resisting the temptation to change that to FAIL for the misdescription of the scheme as creating a duty of care.
Nevertheless, the fact that the duty is of a kind that is owed to no-one paves the way for a multitude of FAILs for other questions.
3. What kinds of effect on a recipient will and will not be regarded as harmful
This is an obvious FAIL. The White Paper has its origins in the Internet Safety Strategy Green Paper, yet does not restrict itself to what in the offline world would be regarded as safety issues.  It makes no attempt to define harm, apparently leaving it up to the proposed Ofweb to decide what should and should not be regarded as harmful. Some examples given in the White Paper suggest that effect on the recipient is not limited to psychological harms, or even distress.
This lack of precision is exacerbated by the fact that the kinds of harm contemplated by the White Paper are not restricted to those that have an identifiable effect on a recipient of the information, but appear to encompass nebulous notions of harm to society.
4. What speech or conduct by a user will and will not be taken to cause such harm
The answer appears to be, potentially, “any”. The WP goes beyond defined unlawfulness into undefined harm, but places no limitation on the kind of behaviour that could in principle be regarded as causing harm. From a rule of law perspective of clarity this may be a PASS, but only in the sense that the kind of behaviour in scope is clearly unlimited.
5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient
FAIL. There is no discussion of either of these points, beyond emphasising many times that children as well as adults should be regarded as potential recipients (although whether the duty of care should mean taking steps to exclude children, or to tailor all content to be suitable for children, or a choice of either, or something else, is unclear). The White Paper makes specific reference to children and vulnerable users, but does not limit itself to those.
6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform
FAIL. The White Paper mentions, specifically in the context of disinformation, the much discussed amplification, filter bubble and echo chamber effects that are associated with social media. More broadly it refers to ‘safety by design’ principles, but does not identify any design features that are said to give rise to a particular risk of harm.
The safety by design principles appear to be not about identifying and excluding features that could be said to give rise to a risk of harm, but more focused on designing in features that the regulator would be likely to require of an operator in order to satisfy its duty of care.
Examples given include clarity to users about what forms of content are acceptable, effective systems for detecting and responding to illegal or harmful content, including the use of AI-based technology and trained moderators; making it easy for users to report problem content, and an efficient triage system to deal with reports.
7. What circumstances would trigger an operator's duty to take preventive or mitigating steps
FAIL. The specification of such circumstances would be left up to the discretion of Ofweb, in its envisaged Codes of Practice or, in the case of terrorism or child sexual exploitation and abuse, the discretion of the Home Secretary via approval of Ofweb’s Codes of Practice.
The only concession made in this direction is that the government is consulting on whether Codes of Practice should be approved by Parliament. However it is difficult to conclude that laying the detailed results of a regulator’s ad hoc consideration before Parliament for approval, almost certainly on a take it or leave it basis, has anything like the same democratic or constitutional force as requiring Parliament to specify the harms and the nature of the duty of care with adequate precision in the first place.
8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm)
The White Paper says that legislation will make clear that companies must do what is reasonably practicable. However that is not enough to prevent a FAIL, for the same reasons as 7. Moreover, it is implicit in the White Paper section on Fulfilling the Duty of Care that the government has its own views on the kinds of steps that operators should be taking to fulfil the duty of care in various areas. This falls uneasily between a statutorily defined duty, the role of an independent regulator in deciding what is required, and the possible desire of government to influence an independent regulator.
9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question
FAIL. The White Paper does not discuss this, beyond the general discussion of freedom of expression in the next question.
10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage) would negate the duty of care
The question of collateral damage is not addressed, other than implicitly in the various statements that the government’s vision includes freedom of expression online and that the regulatory framework will “set clear standards to help companies ensure safety of users while protecting freedom of expression”.
Further, “the regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression.” It will “ensure that the new regulatory requirements do not lead to a disproportionately risk averse response from companies that unduly limits freedom of expression, including by limiting participation in public debate.”
Thus consideration of the consequences of a risk of collateral damage to lawful speech is left up to the decision of a regulator, rather than to the law or a court. The regulator will presumably, by the nature of the proposal, be able to give less weight to the risk of suppressing lawful speech that it considers to be harmful. FAIL.
Postscript
It may be said against much of this analysis that precedents exist for appointing a discretionary regulator with power to decide what does and does not constitute harmful speech.
Thus, for broadcast, the Communications Act 2003 does not define “offensive or harmful” and Ofcom is largely left to decide what those mean, in the light of generally accepted standards.
Whatever the view of the appropriateness of such a regime for broadcast, the White Paper proposals would regulate individual speech. Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in 1997. The US Supreme Court found that:
“This dynamic, multi-faceted category of communication includes not only traditional print and news services, but also audio, video and still images, as well as interactive, real-time dialogue. Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, ‘the content on the internet is as diverse as human thought’ ... We agree with its conclusion that our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”
In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.

Thursday, 18 April 2019

Users Behaving Badly – the Online Harms White Paper

Last Monday, having spent the best part of a day reading the UK government's Online Harms White Paper, I concluded that if the road to hell was paved with good intentions, this was a motorway.

Nearly two weeks on, after full and further consideration, I have found nothing to alter that view. This is why.

The White Paper

First, a reminder of what the White Paper proposes. The government intends to legislate for a statutory ‘duty of care’ on social media platforms and a wide range of other internet companies that "allow users to share or discover user-generated content, or interact with each other online". This could range from public discussion forums to sites carrying user reviews, to search engines, messaging providers, file sharing sites, cloud hosting providers and many others. 

The duty of care would require them to “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”. This would apply not only to illegal content and activities, but also to lawful material regarded as harmful.

The duty of care would be overseen and enforced by a regulator armed with power to fine companies for non-compliance. That might be an existing or a new body (call it Ofweb).


Ofweb would set out rules in Codes of Practice that the intermediary companies should follow to comply with their duty of care. For terrorism and child sexual abuse material the Home Secretary would have direct control over the relevant Codes of Practice.

Users would get a guaranteed complaints mechanism to the intermediary companies. The government is consulting on the possibility of appointing designated organisations who would be able to make ‘super-complaints’ to the regulator.

Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. Ofweb would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online.

Which lawful content would count as harmful is not defined. The White Paper provides an ‘initial’ list of content and behaviour that would be in scope: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM).

This is not a list that could readily be transposed into legislation, even if that were the government’s intention. Some of the topics - FGM, for instance – are more specific than others. But most are almost as unclear as ‘harmful’ itself. For instance the White Paper gives no indication as to what would amount to trolling. It says only that ‘cyberbullying, including trolling, is unacceptable’. It could as well have said ‘behaving badly is unacceptable’.

In any event the White Paper leaves the strong impression that the legislation would eschew even that level of specificity and build the regulatory structure simply on the concept of ‘harmful’.

The White Paper does not say in terms how the ‘initial’ list of content and behaviour in scope would be extended. It seems that the regulator would decide:

“This list is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.” [2.2]
In that event Ofweb would effectively have the power to decide what should and should not be regarded as harmful.

The White Paper proposes some exclusions: harms suffered by companies as opposed to individuals, data protection breaches, harms suffered by individuals resulting directly from a breach of cyber security or hacking, and all harms suffered by individuals on the dark web rather than the open internet.


Here is a visualisation of the White Paper proposals, alongside comparable offline duties of care. 

  
Good intentions

The White Paper is suffused with good intentions. It sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material.

However, flying a virtuous banner is no guarantee that the army is marching in the right direction. Nor does it preclude the possibility that specialised units would be more effective.

The government presents this all-encompassing approach as a virtue, contrasted with:
“a range of UK regulations aimed at specific online harms or services in scope of the White Paper, but [which] creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face” [2.5].
An aversion to fragmentation is like saying that, instead of the mosaic of offline laws made up of criminal offences and civil liability focused on specific kinds of conduct, we should have a single offence of Behaving Badly.

We could not contemplate such a universal offence with equanimity. A Law against Behaving Badly would be so open to subjective and arbitrary interpretation as to be the opposite of law: rule by ad hoc command. Assuredly it would fail to satisfy the rule of law requirement of reasonable certainty. By the same token we should treat with suspicion anything that smacks of a universal Law against Behaving Badly Online.

In placing an undefined and unbounded notion of harm at the centre of its proposals for a universal duty of care, the government has set off down that path.

Three degrees of undefined harm

Harm is an amorphous concept. It changes shape according to the opinion of whoever is empowered to apply it: in the government’s proposal, Ofweb.

Even when limited to harm suffered by an individual, harm is an ambiguous term. It will certainly include objectively ascertainable physical injury – the kind of harm to which comparable offline duties of care are addressed.

But it may also include subjective harms, dependent on someone’s own opinion that they have suffered what they regard as harm. When applied to speech, this is highly problematic. One person may enjoy reading a piece of searing prose. Another may be distressed. How is harm, or the risk of harm, to be determined when different people react in different ways to what they are reading or hearing? Is distress enough to render something harmful? What about mild upset, or moderate annoyance? Does offensiveness inflict harm? At its most fundamental, is speech violence? 

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy:

“We are clear that the regulator will not be responsible for policing truth and accuracy online.” [36] 
“Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not.” [7.31]
The White Paper acknowledges that:
“There will be difficult judgement calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online” [7.31]
The contradiction is not something that can be cured by getting some interested parties around a table. It is the cleft stick into which a proposal of this kind inevitably wedges itself, and from which there is no escape.

A third variety of harm, yet more nebulous, can be put under the heading of ‘harm to society’. This kind of harm does not depend on identifying an individual who might be directly harmed. It tends towards pure abstraction, malleable at the will of the interpreting authority.

Harms to society feature heavily in the White Paper, for example content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
Similarly:
“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.

Democratic deficit

One particular concern is the potential for a duty of care supervised by a regulator and based on a malleable notion of harm to be used as a mechanism to give effect to some Ministerial policy of the day, without the need to obtain legislation.

Thus, two weeks before the release of the White Paper Health Secretary Matt Hancock suggested that anti-vaxxers could be targeted via the forthcoming duty of care.

The White Paper duly recorded, under “Threats to our way of life”, that “Inaccurate information, regardless of intent, can be harmful – for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health.” [1.23]

If a Secretary of State decides that he wants to silence anti-vaxxers, the right way to go about it is to present a Bill to Parliament, have it debated and, if Parliament agrees, pass it into law. The structure envisaged by the White Paper would create a channel whereby an ad hoc Ministerial policy to silence a particular group or kind of speech could be framed as combating an online harm, pushed to the regulator then implemented by its online intermediary proxies. Such a scheme has democratic deficit hard baked into it.

Perhaps in recognition of this, the government is consulting on whether Parliament should play a role in developing or approving Ofweb’s Codes of Practice. That, however, smacks more of sticking plaster than cure.

Impermissible vagueness

Building a regulatory structure on a non-specific notion of harm is not a matter of mere ambiguity, where some word in an otherwise unimpeachable statute might mean one thing or another and the court has to decide which it is. It strays beyond ambiguity into vagueness and gives rise to rule of law issues.

The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:

"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."
Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."
Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

If the duty of care is based on an impermissibly vague concept such as ‘harm’, then the legislation has a rule of law problem. It is not necessarily cured by empowering the regulator to clothe the skeleton with codes of practice and interpretations, for three reasons: 

First, impermissibly vague legislation does not provide a skeleton at all – more of a canvas on to which the regulator can paint at will; 

Second, if it is objectionable for the legislature to delegate basic policy matters to policemen, judges and juries it is unclear why it is any less objectionable to do so to a regulator; 

Third, regulator-made law is a moveable feast.

All power to the sheriff

From a rule of law perspective undefined harm ought not to take centre stage in legislation.

However if the very idea is to maximise the power and discretion of a regulator, then inherent vagueness in the legislation serves the purpose very well. The vaguer the remit, the more power is handed to the regulator to devise policy and make law.

John Humphrys, perhaps unwittingly, put his finger on it during the Today programme on 8 April 2019 (4:00 onwards). Joy Hyvarinen of Index on Censorship pointed out how broadly Ofcom had interpreted harm in its 2018 survey, to which John Humphrys retorted: “You deal with that by defining [harm] more specifically, surely”.

That would indeed be an improvement. But what interest would a government intent on creating a powerful regulator, not restricted to a static list of in-scope content and behaviour, have in cramping the regulator’s style with strict rules and carefully limited definitions of harm? In this scheme of things breadth and vagueness are not faults but a means to an end.

There is a precedent for this kind of approach in broadcast regulation. The Communications Act 2003 refers to 'offensive and harmful', makes no attempt to define them and leaves it to Ofcom to decide what they mean. Ofcom is charged with achieving the objective: 
“that generally accepted standards are applied to the contents of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.
William Perrin and Professor Lorna Woods, whose work on duties of care has influenced the White Paper, say of the 2003 Act that: 
"competent regulators have had little difficulty in working out what harm means" [37]. 
They endorse Baroness Grender’s contribution to a House of Lords debate in November 2018, in which she asked: 
"Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?"
The answer is that in 2003 the legislators did not have to understand what the vague term 'harm' meant because they gave Ofcom the power to decide. It is no surprise if Ofcom has had little difficulty, since it is in reality not 'working out what harm means' but deciding on its own meanings. It is, in effect, performing a delegated legislative function.

Ofweb would be in the same position, effectively exercising a delegated power to decide what is and is not harmful.

Broadcast regulation is an exception from the norm that speech is governed only by the general law. Because of its origins in spectrum scarcity and the perceived power of the medium, it has been considered acceptable to impose stricter content rules and a discretionary style of regulation on broadcast, in addition to the general laws (defamation, obscenity and so on) that apply to all speech.

That does not, however, mean that a similar approach is appropriate for individual speech. Vagueness goes hand in hand with arbitrary exercise of power. If this government had set out to build a scaffold from which to hang individual online speech, it could hardly have done better.

The duty of care that isn’t

Lastly, it is notable that as far as can be discerned from the White Paper the proposed duty of care is not really a duty of care at all.

A duty of care properly so called is a legal duty owed to identifiable persons. They can claim damages if they suffer injury caused by a breach of the duty. Common law negligence and liability under the Occupiers’ Liability Act 1957 are examples. These are typically limited to personal injury and damage to physical property; and only rarely impose a duty on, say, an occupier, to prevent visitors injuring each other. An occupier owes no duty in respect of what visitors to the property say to each other.

The absence in the White Paper of any nexus between the duty of care and individual persons would allow Ofweb’s remit to be extended beyond injury to individuals and into the nebulous realm of harms to society. That, as discussed above, is what the White Paper proposes.

Occasionally a statute creates something that it calls a duty of care, but which in reality describes a duty owed to no-one in particular, breach of which is (for instance) a criminal offence.

An example is s.34 of the Environmental Protection Act 1990, which creates a statutory duty in respect of waste disposal. As would be expected of such a statute, s.34 is precise about the conduct that is in scope of the duty. In contrast, the White Paper proposes what is in effect a universal online ‘Behaving Badly’ law.

Even though the Secretary of State referred in a recent letter to the Society of Editors to “A duty of care between companies and their users”, the ‘duty of care’ described in the White Paper is something quite different from a duty of care properly so called.

The White Paper’s duty of care is a label applied to a regulatory framework that would give Ofweb discretion to decide what user communications and activities on the internet should be deemed harmful, and the power to enlist proxies such as social media companies to sniff and snuff them out, and to take action against an in-scope company if it does not comply.

This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.


[Added visualisation 24 May 2019. Amended 19 July 2019 to make clear that the regulator might be an existing or new body - see Consultation Q.10.]