Wednesday 24 June 2020

Online Harms Revisited

When I first read the Online Harms White Paper in April last year, John Naughton in the Observer quoted my comment that if the road to hell was paved with good intentions, then this was a motorway; to which he riposted that the longest journey begins with a single step and the White Paper was it. Fourteen months down the road, has anything altered my view?
There were many reasons for taking that view at the time. To pick some:
  1. Undefined harm. Not only would the proposed duty of care include lawful but harmful speech, but there was no attempt to define what might be meant by harm.
  2. Subjectivity. In the context of speech, harm is subjective. That provides the regulator with virtually unlimited discretion to decide what should be regarded as harmful.
  3. Democratic deficit. The regulator would, in effect, be tasked with constructing a parallel set of rules about individual speech. That would supplant the statute book – with all its carefully debated, constructed and balanced provisions about offline and online speech. If you delegate an all-encompassing law of everything to the discretion of a regulator, inevitably the regulator will end up making decisions on matters of principle that ought to be the sole preserve of Parliament.
  4. Rule of law. Regulation by fiat of a discretionary regulator challenges the rule of law. Vagueness and unfettered discretion may provide flexibility, but they offend against the legality principle – the requirement of clarity and certainty in the rules that govern us. That principle is long established both as part of the European Convention on Human Rights and, for centuries before that, as domestic English law. Breach of the legality principle is especially objectionable where a foundational right such as freedom of speech is concerned.
  5. Online-offline equivalence. In the offline world, safety-related duties of care apply – with good reason – only to objectively ascertainable harm such as personal injury. In relation to visitors (and online users are the direct equivalent of visitors to offline premises) such duties of care hardly ever apply in respect of injury caused by one visitor to another; and never in respect of what visitors say to each other.
    It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech - to which it is entirely unfitted.
  6. Who is being regulated? The duty of care scheme is presented as being about regulating platforms. But that is not the truth of it. It is we individual users whose speech will be regulated.  It is we users who will be liable to have our speech suppressed by online intermediaries acting as a posse of co-opted deputies of an online sheriff – a sheriff equipped with the power to write its own laws.
  7. Broadcast-style regulation is the exception, not the norm. In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is as true for the internet as for any other medium.

That was April last year. Since then we have seen the government’s Initial Consultation Response in February this year; and recently several Ministers have appeared before select committees.  What has changed?
In respect of undefined harm, nothing. The impression that harm is whatever the regulator decides it to be was reinforced in one of the recent Select Committee hearings, when the DCMS Minister said:

“We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Caroline Dinenage, Home Affairs Committee, 13 May 2020)

The main development in the Initial Consultation Response was the shift to a differentiated duty of care. This, we are told, means that for lawful but harmful content seen by adults, intermediaries will be free to set content standards in their terms and conditions. The interest of the regulator will be in ensuring that the T&Cs are enforced transparently and consistently – and perhaps effectively, depending on which section of the Initial Response you read.
Similarly, we are told that the regulator will be concerned with systems and processes to deal with online harms, not requiring the removal of specific pieces of legal content.
But is this really all that it seems? If effectiveness is a criterion, what does that mean? Is it about effectiveness in reducing harm? If so, we are back to that being based on the regulator’s view of what constitutes harm.
Nor can we ignore Ministers' apparent enthusiasm for influencing platforms as to what should be in their terms and conditions and what their algorithms should - or at least should not - be doing; all of which is evident in the recent Committee hearings. I am very much afraid that this professed shift towards a differentiated duty of care is not quite what it might seem.
Of course, we will be assured that the legislation will be used sensibly and proportionately. And, no doubt, the regulator will be required to have regard to the fundamental right of freedom of expression. But that doesn’t really cut it. You cannot cure a hole in the heart by the statutory equivalent of exhorting the patient to get better.
Let us take an example. In the recent Home Affairs Committee session discussing 5G conspiracy theories, the Home Office Lords Minister said that 5G disinformation could be divided into “harmless conspiracy theories” and “that which actually leads to attacks on engineers”.
A Committee member challenged this. She said she did not think that any element of the conspiracy theory could be categorised as ‘harmless’, because – and this is the important bit – “it is threatening public confidence in the 5G roll-out”. Then, to my astonishment if no-one else’s, the DCMS Minister agreed.
Pausing for a moment, the harm that is being identified here is people changing their opinion about the benefit of a telecommunications project.
On that basis adverse opinions about HS2, about the Heathrow 3rd runway, about any major public project, could be deemed harmful on the ground that those opinions were misinformed.
Finding ourselves in that kind of territory is, unfortunately, the inevitable result of the government’s refusal to define harm. Where speech is concerned, undefined harm is infinitely subjective and infinitely malleable.
It is easy to respond to objections of principle by saying: but children, terrorism, misinformation, cyberbullying, racism, harassment, revenge porn, abuse, and everything else in the long list of ills that are laid at the door of the internet and social media. These are undoubtedly serious matters. But none of them relieves a government of the responsibility of devising policies and legislation in a fashion that pays proper regard to constitutional principles forged over centuries of respect for the rule of law.
There are times when, engaging in this kind of commentary, one begins to doubt one’s own perspective. Hardly anyone seems to be listening; the government ploughs on regardless and gives no sign of acknowledging that these issues even exist, never mind addressing them.
But last week, an event occurred that restores some belief that no, we have not been catapulted into some anti-matter universe in which the fundamental principles of how to legislate for individual speech have been turned on their head.
The French Constitutional Council struck down large parts of the French hate speech law on grounds that refer to many of the same objections of principle that have been levelled at the UK government’s Online Harms proposals: vagueness; subjectivity; administrative discretion; lack of due process; prior restraint; chilling effect; and others.
The French law is concerned with platform obligations in relation to illegality. Such objections apply all the more to the UK government’s proposals, since they extend beyond illegality to lawful but harmful material.
The vulnerability of the UK proposals is no accident. It stems directly from the foundational design principles that underpin them: the flawed concept of an unbounded universal duty of care in relation to undefined harm.
Heading down the “law of everything” road was always going to land the government in the morass of complexity and arbitrariness in which it now finds itself. One of the core precepts of the White Paper is imposing safety by design obligations on intermediaries. But if anything is unsafe by design, it is this legislation.
[This post is based on a panel presentation to the Westminster e-Forum event on Online Regulation on 23 June 2020. A compilation of Cyberleagle posts on Online Harms is maintained here.] 

Saturday 20 June 2020

Online Harms and the Legality Principle

The government’s Online Harms proposals, seeking to impose obligations on online intermediaries to suppress and inhibit some kinds of content posted by users, have been dogged from the outset by questionable compatibility with the rule of law. The situation has not been improved by the government’s response to its White Paper consultation and a recent round of Ministerial select committee appearances.

The rule of law issue is that a restriction on freedom must take its authority from something that can properly be described as law. Not every state edict will do, even if it has passed through appropriate legislative procedures.

This is the legality principle.  Pertinently, the French Constitutional Council has recently held unconstitutional — in some respects for breach of the legality principle — large parts of the new French hate speech legislation (the loi Avia), which imposes content removal obligations on online intermediaries.

The European Convention on Human Rights articulates the legality principle in terms that a restriction on a Convention right must be ‘prescribed by law’, or be ‘in accordance with the law’.

The legality principle is also part of the common law tradition. Lord Sumption observed last year in Re Gallagher (as to which, more below) that the principle goes back at least as far as the American founding father John Adams: “a government of laws and not of men”. (So those of you who may be sharpening your knives at the mention of the European Convention, put them away.)  

That was echoed by Lady Hale in the same case:
“The foundation of the principle of legality is the rule of law itself - that people are to be governed by laws not men. They must not be subjected to the arbitrary - that is, the unprincipled, whimsical or inconsistent - decisions of those in power.”

The legality principle has two aspects. The first is that the law be publicly accessible. The second — the aspect that concerns us here — is that something that purports to be law must have the quality of law: it must be possible for someone to foresee, with reasonable certainty, whether their contemplated conduct is liable to be affected by the law or not.

Lord Diplock, in a 1975 case, described the principle as constitutional:
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it." (Black-Clawson)   

The proposed Online Harms legislation falls squarely within that principle, since internet users are liable to have their posts, tweets, online reviews and every other kind of public or semi-public communication interfered with by the platform to which they are posting, as a result of the duty of care to which the platform would be subject. Users, under the principle of legality, must be able to foresee, with reasonable certainty, whether the intermediary would be legally obliged to interfere with what they are about to say online.

Legislation may fall foul of the legality principle in two main ways: impermissible vagueness or excessive discretionary power.  Lady Hale in Re Gallagher again:
“The law will not be sufficiently predictable if it is too broad, too imprecise or confers an unfettered discretion on those in power.” [73]

A simple criminal offence of ‘behaving badly’ would fail the legality test for impermissible vagueness — since no-one could predict in advance with any, let alone reasonable, certainty whether their behaviour would be regarded as criminal. 

Vagueness was among the grounds on which the French Constitutional Council decided that an aspect of the French loi Avia contributed to unconstitutionality. The provision in question specified that an intentional breach of the intermediary’s obligation could arise from the absence of a proportionate and necessary examination of notified content. That was not expressed in terms that enabled the scope of liability to be determined. Although that analysis supported a finding of unconstitutionality by reason of lack of necessity and proportionality, it is the kind of analysis that is also relevant to legality.

The second way of failing the legality test is when legislation confers excessive discretionary power on a state official or body.  A purely discretionary power of ad hoc command does not suffice. Lord Sumption in Re Gallagher said:
“The measure must not therefore confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself. Nor should it be couched in terms so vague or so general as to produce substantially the same effect in practice.”

A discretionary power exercised on defined principles, and (if necessary) accompanied by safeguards against abuse, is capable of satisfying the legality test if the result is to render the effect of the power sufficiently foreseeable. Lord Sumption again:
“Thus a power whose exercise is dependent on the judgment of an official as to when, in what circumstances or against whom to apply it, must be sufficiently constrained by some legal rule governing the principles on which that decision is made.”

Underlying both kinds of failure to satisfy the legality test is arbitrariness. Exercise of an unfettered discretionary power is inherently arbitrary, since officials may wield the power as they please.
“an excessively broad discretion in the application of a measure infringing the right of privacy is likely to amount to an exercise of power unconstrained by law. It cannot therefore be in accordance with the law unless there are sufficient safeguards, exercised on known legal principles, against the arbitrary exercise of that discretion, so as to make its exercise reasonably foreseeable.” (Re Gallagher at [31])

English courts have interpreted the European Court of Human Rights caselaw as requiring safeguards to have the effect of enabling the proportionality of the interference to be adequately examined. (R (T) at [118]; Re Gallagher at [36] and [39])

Arbitrariness is also the vice of an impermissibly vague law:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application." (R v Rimmington (House of Lords), citing the US case of Grayned)

“the statements in R(T) about the need for safeguards against “arbitrary” interference …  [refer] to safeguards essential to the rule of law because they protect against the abuse of imprecise rules or unfettered discretionary powers.” (Re Gallagher at [41])

The recent French Constitutional Council decision provides an example of this kind of assessment, albeit in the context of necessity and proportionality rather than the legality principle. The loi Avia empowers an administrative authority to require a host to remove certain terrorist or child pornography content within one hour. This was objectionable because determination of illegality was in the sole opinion of the administrative authority and did not rest on the manifest character of the content. Nor was there any opportunity for the host to obtain a judicial ruling.

The UK online harms project is vulnerable for both reasons: impermissible vagueness and discretionary power amounting to ad hoc command. It is potentially more exposed to challenge than the French law, given that it extends beyond illegality to lawful but harmful material and activity.

The main vagueness objection to the Online Harms proposals stems from the apparent determination of the government to leave the concept of harm undefined. The vagueness inherent in undefined harm is discussed here.  

The White Paper and the Initial Response gave the impression that the government would leave it to the regulator to decide what is harmful — an impression recently reinforced by a DCMS Minister:
“We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Caroline Dinenage, Home Affairs Committee, 13 May 2020)

Any proposed legislation is likely to incorporate some attempt to articulate principles on the basis of which the discretionary power is to be exercised and safeguards intended to protect against abuse of the power. What form those principles and safeguards might take, and whether they would be capable of remedying the intrinsic legality problem, are open questions.

The government would no doubt be tempted to address such issues by including statutory obligations on the regulator, for instance to have regard to the fundamental right of freedom of expression and to act proportionately. That may be better than nothing. But can a congenital defect in legislation really be cured by the statutory equivalent of exhorting the patient to get better? It is akin to putting a sticking plaster on a hole in the heart.

The ECtHR caselaw consistently emphasises the need for clear and precise rules, for discretionary powers to be clear in scope and for safeguards to be clearly and precisely expressed.

Could Codes of Practice issued by a regulator remedy a lack of clarity? In principle that cannot be ruled out — that kind of gap-filling has been effective in the context of surveillance powers — but would Codes of Practice in this area amount to more than high level principles and a collection of ad-hoc examples? Even if they formed a coherent set of concrete rules, the regulator’s views about harm would sit alongside, and effectively supplant, the existing, carefully crafted, set of laws governing the speech of individuals.

In the context of rules about speech, that amounts to accepting wholesale delegation of lawmaking power from Parliament to a regulator, in respect of what is often regarded as the most foundational right.

As the French Constitutional Council observed:
“[F]reedom of expression and communication is all the more precious since its exercise is a condition of democracy and one of the guarantees of respect for other rights and freedoms.”

The French Constitutional Council decision is a salutary reminder that fundamental rights issues are not the sole preserve of free speech purists, nor mere legal pedantry to be brushed aside in the eagerness to do something about the internet and social media. These questions go to the heart of the legitimacy of the government’s proposals.

[Amended 22 June 2020 to make clear that the French Constitutional Council analyses were in the context of findings of lack of necessity and proportionality.]