Last Monday, having spent the best part of a day reading the UK government's Online Harms White Paper, I concluded that if the road to hell was paved with good intentions, this was a motorway.
Nearly two weeks on, after full and further consideration, I have found nothing to alter that view. This is why.
The White Paper
First, a reminder of what the White Paper proposes. The government intends to legislate for a statutory ‘duty of care’ on social media platforms and a wide range of other internet companies that "allow users to share or discover user-generated content, or interact with each other online". This could range from public discussion forums to sites carrying user reviews, to search engines, messaging providers, file sharing sites, cloud hosting providers and many others.
The duty of care would require them to “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”. This would apply not only to illegal content and activities, but also to lawful material regarded as harmful.
The duty of care would be overseen and enforced by a regulator armed with power to fine companies for non-compliance. That might be an existing or a new body (call it Ofweb).
Ofweb would set out rules in Codes of Practice that the intermediary companies should follow to comply with their duty of care. For terrorism and child sexual abuse material the Home Secretary would have direct control over the relevant Codes of Practice.
Users would get a guaranteed complaints mechanism to the intermediary companies. The government is consulting on the possibility of appointing designated organisations who would be able to make ‘super-complaints’ to the regulator.
Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. Ofweb would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online.
Which lawful content would count as harmful is not defined. The White Paper provides an ‘initial’ list of content and behaviour that would be in scope: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM).
This is not a list that could readily be transposed into legislation, even if that were the government’s intention. Some of the topics (FGM, for instance) are more specific than others. But most are almost as unclear as ‘harmful’ itself. For instance, the White Paper gives no indication of what would amount to trolling. It says only that ‘cyberbullying, including trolling, is unacceptable’. It could as well have said ‘behaving badly is unacceptable’.
In any event the White Paper leaves the strong impression that the legislation would eschew even that level of specificity and build the regulatory structure simply on the concept of ‘harmful’.
The White Paper does not say in terms how the ‘initial’ list of content and behaviour in scope would be extended. It seems that the regulator would decide:
“This list is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.” [2.2]

In that event Ofweb would effectively have the power to decide what should and should not be regarded as harmful.
The White Paper proposes some exclusions: harms suffered by companies as opposed to individuals, data protection breaches, harms suffered by individuals resulting directly from a breach of cyber security or hacking, and all harms suffered by individuals on the dark web rather than the open internet.
Here is a visualisation of the White Paper proposals, alongside comparable offline duties of care.
Good intentions
The White Paper is suffused with good intentions. It sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material.
However, flying a virtuous banner is no guarantee that the army is marching in the right direction. Nor does it preclude the possibility that specialised units would be more effective.
The government presents this all-encompassing approach as a virtue, contrasted with:
“a range of UK regulations aimed at specific online harms or services in scope of the White Paper, but [which] creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face” [2.5]

This aversion to fragmentation is like saying that, instead of the framework of criminal offences and civil liability focused on specific kinds of conduct that makes up our mosaic of offline laws, we should have a single offence of Behaving Badly.
We could not contemplate such a universal offence with equanimity. A Law against Behaving Badly would be so open to subjective and arbitrary interpretation as to be the opposite of law: rule by ad hoc command. Assuredly it would fail to satisfy the rule of law requirement of reasonable certainty. By the same token we should treat with suspicion anything that smacks of a universal Law against Behaving Badly Online.
In placing an undefined and unbounded notion of harm at the centre of its proposals for a universal duty of care, the government has set off down that path.
Three degrees of undefined harm
Harm is an amorphous concept. It changes shape according to the opinion of whoever is empowered to apply it: in the government’s proposal, Ofweb.
Even when limited to harm suffered by an individual, harm is an ambiguous term. It will certainly include objectively ascertainable physical injury – the kind of harm to which comparable offline duties of care are addressed.
But it may also include subjective harms, dependent on someone’s own opinion that they have suffered what they regard as harm. When applied to speech, this is highly problematic. One person may enjoy reading a piece of searing prose. Another may be distressed. How is harm, or the risk of harm, to be determined when different people react in different ways to what they are reading or hearing? Is distress enough to render something harmful? What about mild upset, or moderate annoyance? Does offensiveness inflict harm? At its most fundamental, is speech violence?
‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.
This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.
Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy:
“We are clear that the regulator will not be responsible for policing truth and accuracy online.” [36]
“Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not.” [7.31]

The White Paper acknowledges that:

“There will be difficult judgement calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online” [7.31]

The contradiction is not something that can be cured by getting some interested parties around a table. It is the cleft stick into which a proposal of this kind inevitably wedges itself, and from which there is no escape.
A third variety of harm, yet more nebulous, can be put under the heading of ‘harm to society’. This kind of harm does not depend on identifying an individual who might be directly harmed. It tends towards pure abstraction, malleable at the will of the interpreting authority.
Harms to society feature heavily in the White Paper, for example content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.
Democratic deficit
One particular concern is the potential for a duty of care supervised by a regulator and based on a malleable notion of harm to be used as a mechanism to give effect to some Ministerial policy of the day, without the need to obtain legislation.
Thus, two weeks before the release of the White Paper, Health Secretary Matt Hancock suggested that anti-vaxxers could be targeted via the forthcoming duty of care.
The White Paper duly recorded, under “Threats to our way of life”, that “Inaccurate information, regardless of intent, can be harmful – for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health.” [1.23]
If a Secretary of State decides that he wants to silence anti-vaxxers, the right way to go about it is to present a Bill to Parliament, have it debated and, if Parliament agrees, pass it into law. The structure envisaged by the White Paper would create a channel whereby an ad hoc Ministerial policy to silence a particular group or kind of speech could be framed as combating an online harm, pushed to the regulator then implemented by its online intermediary proxies. Such a scheme has democratic deficit hard baked into it.
Perhaps in recognition of this, the government is consulting on whether Parliament should play a role in developing or approving Ofweb’s Codes of Practice. That, however, smacks more of sticking plaster than cure.
Impermissible vagueness
Building a regulatory structure on a non-specific notion of harm is not a matter of mere ambiguity, where some word in an otherwise unimpeachable statute might mean one thing or another and the court has to decide which it is. It strays beyond ambiguity into vagueness and gives rise to rule of law issues.
The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.
If the duty of care is based on an impermissibly vague concept such as ‘harm’, then the legislation has a rule of law problem. It is not necessarily cured by empowering the regulator to clothe the skeleton with codes of practice and interpretations, for three reasons:
First, impermissibly vague legislation does not provide a skeleton at all – more of a canvas on to which the regulator can paint at will;
Second, if it is objectionable for the legislature to delegate basic policy matters to policemen, judges and juries it is unclear why it is any less objectionable to do so to a regulator;
Third, regulator-made law is a moveable feast.
All power to the sheriff
From a rule of law perspective undefined harm ought not to take centre stage in legislation.
However, if the very idea is to maximise the power and discretion of a regulator, then inherent vagueness in the legislation serves the purpose very well. The vaguer the remit, the more power is handed to the regulator to devise policy and make law.
John Humphrys, perhaps unwittingly, put his finger on it during the Today programme on 8 April 2019 (4:00 onwards). Joy Hyvarinen of Index on Censorship pointed out how broadly Ofcom had interpreted harm in its 2018 survey, to which John Humphrys retorted: “You deal with that by defining [harm] more specifically, surely".
That would indeed be an improvement. But what interest would a government intent on creating a powerful regulator, not restricted to a static list of in-scope content and behaviour, have in cramping the regulator’s style with strict rules and carefully limited definitions of harm? In this scheme of things breadth and vagueness are not faults but a means to an end.
There is a precedent for this kind of approach in broadcast regulation. The Communications Act 2003 refers to 'offensive and harmful' material, makes no attempt to define those terms and leaves it to Ofcom to decide what they mean. Ofcom is charged with achieving the objective:
“that generally accepted standards are applied to the contents of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.

William Perrin and Professor Lorna Woods, whose work on duties of care has influenced the White Paper, say of the 2003 Act that:
"competent regulators have had little difficulty in working out what harm means" [37].
They endorse Baroness Grender’s contribution to a House of Lords debate in November 2018, in which she asked:
"Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?"The answer is that in 2003 the legislators did not have to understand what the vague term 'harm' meant because they gave Ofcom the power to decide. It is no surprise if Ofcom has had little difficulty, since it is in reality not 'working out what harm means' but deciding on its own meanings. It is, in effect, performing a delegated legislative function.
Ofweb would be in the same position, effectively exercising a delegated power to decide what is and is not harmful.
Broadcast regulation is an exception to the norm that speech is governed only by the general law. Because of its origins in spectrum scarcity and the perceived power of the medium, it has been considered acceptable to impose stricter content rules and a discretionary style of regulation on broadcast, in addition to the general laws (defamation, obscenity and so on) that apply to all speech.
That does not, however, mean that a similar approach is appropriate for individual speech. Vagueness goes hand in hand with arbitrary exercise of power. If this government had set out to build a scaffold from which to hang individual online speech, it could hardly have done better.
The duty of care that isn’t
Lastly, it is notable that, as far as can be discerned from the White Paper, the proposed duty of care is not really a duty of care at all.

A duty of care properly so called is a legal duty owed to identifiable persons, who can claim damages if they suffer injury caused by a breach of the duty. Common law negligence and liability under the Occupiers’ Liability Act 1957 are examples. These are typically limited to personal injury and damage to physical property, and only rarely impose a duty on, say, an occupier to prevent visitors injuring each other. An occupier owes no duty in respect of what visitors to the property say to each other.
The absence in the White Paper of any nexus between the duty of care and individual persons would allow Ofweb’s remit to be extended beyond injury to individuals and into the nebulous realm of harms to society. That, as discussed above, is what the White Paper proposes.
Occasionally a statute creates something that it calls a duty of care, but which in reality describes a duty owed to no-one in particular, breach of which is (for instance) a criminal offence.
An example is s.34 of the Environmental Protection Act 1990, which creates a statutory duty in respect of waste disposal. As would be expected of such a statute, s.34 is precise about the conduct that is in scope of the duty. In contrast, the White Paper proposes what is in effect a universal online ‘Behaving Badly’ law.
Even though the Secretary of State referred, in a recent letter to the Society of Editors, to “A duty of care between companies and their users”, the ‘duty of care’ described in the White Paper is something quite different from a duty of care properly so called.
The White Paper’s duty of care is a label applied to a regulatory framework that would give Ofweb discretion to decide what user communications and activities on the internet should be deemed harmful, the power to enlist proxies such as social media companies to sniff and snuff them out, and the power to take action against an in-scope company if it does not comply.
This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.
[Added visualisation 24 May 2019. Amended 19 July 2019 to make clear that the regulator might be an existing or new body - see Consultation Q.10. Grammatical error corrected 1 March 2023.]