When I first read the Online Harms
White Paper in April last year, John Naughton in the Observer quoted my comment that if the road to hell was paved
with good intentions, then this was a motorway; to which he riposted that the longest journey begins with a single step and the White Paper was it. Fourteen months down the road, has anything altered
my view?
There were many reasons for thinking that at the time. To pick some:
- Undefined harm: Not only would the proposed duty of care include lawful but harmful speech, but there was no attempt to define what might be meant by harm.
- Subjectivity: In the context of speech, harm is subjective. That provides the regulator with virtually unlimited discretion to decide what should be regarded as harmful.
- Democratic deficit: The regulator would, in effect, be tasked with constructing a parallel set of rules about individual speech. That would supplant the statute book – with all its carefully debated, constructed and balanced provisions about offline and online speech. If you delegate an all-encompassing law of everything to the discretion of a regulator, inevitably the regulator will end up making decisions on matters of principle that ought to be the sole preserve of Parliament.
- Rule of law: Regulation by fiat of a discretionary regulator challenges the rule of law. Vagueness and unfettered discretion may provide flexibility, but they offend against the legality principle – the requirement of clarity and certainty in the rules that govern us. That principle is long established both as part of the European Convention on Human Rights and, for centuries before that, as domestic English law. Breach of the legality principle is especially objectionable where a foundational right such as freedom of speech is concerned.
- Online-offline equivalence: In the offline world, safety-related duties of care apply – with good reason – only to objectively ascertainable harm such as personal injury. In relation to visitors (and online users are the direct equivalent of visitors to offline premises) such duties of care hardly ever apply in respect of injury caused by one visitor to another; and never in respect of what visitors say to each other. It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech – to which it is entirely unfitted.
- Who is being regulated? The duty of care scheme is presented as being about regulating platforms. But that is not the truth of it. It is we individual users whose speech will be regulated. It is we users who will be liable to have our speech suppressed by online intermediaries acting as a posse of co-opted deputies of an online sheriff – a sheriff equipped with the power to write its own laws.
- Broadcast-style regulation is the exception, not the norm: In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is as true for the internet as for any other medium.
That was April last year. Since
then we have seen the government’s Initial Consultation Response in February
this year; and recently several Ministers have appeared before select committees. What has changed?
In respect of undefined harm,
nothing. The impression that harm is whatever the regulator decides was
reinforced in one of the recent Select Committee hearings, when the DCMS
Minister said:
“We want to make sure that this piece of legislation will be
agile and able to respond to harms as they emerge. The legislation will make
that clearer, but it will be for the regulator to outline what the harms are
and to do that in partnership with the platforms.” (Caroline Dinenage, Home
Affairs Committee, 13 May 2020)
The main development in the Initial
Consultation Response was the shift to a differentiated duty of care. This, we
are told, means that for lawful but harmful content seen by adults, intermediaries
will be free to set content standards in their terms and conditions. The
interest of the regulator will be in ensuring that the T&Cs are enforced transparently
and consistently – and perhaps effectively, depending on which section of the
Initial Response you read.
Similarly, we are told that the
regulator will be concerned with systems and processes to deal with online
harms, not with requiring the removal of specific pieces of legal content.
But is this really all that it
seems? If effectiveness is a criterion, what does that mean? Is it about effectiveness
in reducing harm? If so, we are back to relying on the regulator’s
view of what constitutes harm.
Nor can we ignore Ministers' apparent enthusiasm
for influencing platforms as to what should be in their terms
and conditions and what their algorithms should – or at least should not – be doing; all of
which is evident in the recent Committee hearings. I am very much afraid that
this professed shift towards a differentiated duty of care is not quite what it
might seem.
Of course, we will be assured that the
legislation will be used sensibly and proportionately. And, no doubt, the
regulator will be required to have regard to the fundamental right of freedom
of expression. But that doesn’t really cut it. You cannot cure a hole in the
heart by the statutory equivalent of exhorting the patient to get better.
Let us take an example. In the
recent Home Affairs Committee session discussing 5G conspiracy theories, the
Home Office Lords Minister had said that 5G disinformation could be divided
into “harmless conspiracy theories” and “that which actually leads to attacks
on engineers”.
A Committee member challenged this.
She said she did not think that any element of the conspiracy theory could be
categorised as ‘harmless’, because – and this is the important bit – “it is threatening public confidence in the 5G
roll-out”. Then, to my astonishment if no-one else’s, the DCMS Minister agreed.
Pause for a moment: the harm being identified here is people changing their opinion about the benefit of
a telecommunications project.
On that basis, adverse opinions
about HS2, about the Heathrow third runway, about any major public project,
could be deemed harmful on the footing that the opinions were misinformed.
Finding ourselves in that kind of
territory is, unfortunately, the inevitable result of the government’s refusal
to define harm. Where speech is concerned, undefined harm is infinitely subjective
and infinitely malleable.
It is easy to respond to objections
of principle by saying: but children, terrorism, misinformation, cyberbullying,
racism, harassment, revenge porn, abuse, and everything else in the long list
of ills that are laid at the door of the internet and social media. These are undoubtedly
serious matters. But none of them relieves a government of the responsibility
of devising policies and legislation in a fashion that pays proper regard to constitutional
principles forged over centuries of respect for the rule of law.
There are times when, engaging in this
kind of commentary, one begins to doubt one’s own perspective. Hardly anyone
seems to be listening; the government ploughs on regardless and gives no sign
of acknowledging that these issues even exist, never mind addressing them.
But last week, an event occurred that
restores some belief that no, we have not been catapulted into some anti-matter
universe in which the fundamental principles of how to legislate for individual
speech have been turned on their head.
The French Constitutional Council
struck down large parts of the French hate speech law on grounds that refer to many
of the same objections of principle that have been levelled at the UK government’s
Online Harms proposals: vagueness; subjectivity; administrative discretion; lack
of due process; prior restraint; chilling effect; and others.
The French law is concerned with
platform obligations in relation to illegality. Such objections apply all the
more to the UK government’s proposals, since they extend beyond illegality to
lawful but harmful material.
The vulnerability of the UK
proposals is no accident. It stems directly from the foundational design
principle that underpins them: the flawed concept of an unbounded universal duty
of care in relation to undefined harm.
Heading down the “law of
everything” road was always going to land the government in the morass of
complexity and arbitrariness in which it now finds itself. One of the core precepts
of the White Paper is imposing safety by design obligations on intermediaries. But
if anything is unsafe by design, it is this legislation.
[This post is based on a panel presentation to the Westminster e-Forum event on Online Regulation on 23 June 2020. A compilation of Cyberleagle posts on Online Harms is maintained here.]