All the signs are that the government will shortly propose imposing a duty of care on social media platforms, aimed at reducing the risk of harm to users.
DCMS Secretary of State Jeremy Wright wrote recently:
"A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…".
The House of Lords Communications Committee invoked a
similar 'parity principle':
"The same level of protection must be provided online as offline."
Notwithstanding that the duty of care concept is framed as
a transposition of offline duties of care to online, proposals for a social
media duty of care will almost certainly go significantly beyond any comparable
offline duty of care.
When we examine safety-related duties of care owed by operators of offline public spaces to their visitors, we find that they:
(a) are restricted to objectively ascertainable injury;
(b) rarely impose liability for what visitors do to each other; and
(c) do not impose liability for what visitors say to each other.
The social media duties of care that have been publicly discussed so far breach all three of these barriers. They relate to subjective harms and are about what users do and say to each other. Nor are they restricted to activities that are unlawful as between the users themselves.
The substantive merits and demerits of any proposed social media duty of care will no doubt be hotly debated. But the likely scope of a duty of care raises a prior rule of law issue. The more broadly a duty of care is framed, the greater the risk that it will stray into impermissible vagueness.
The rule of law objection to vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:
"Vagueness offends several
important values … A vague law impermissibly delegates basic policy matters to
policemen, judges and juries for resolution on an ad hoc and subjective basis,
with the attendant dangers of arbitrary and discriminatory application."
Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the
rule of law as a constitutional principle requires that a citizen, before
committing himself to any course of action, should be able to know in advance
what are the legal consequences that will flow from it."
Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.
With all this in mind, I propose a ten-point rule of law test by which the government’s proposals, when they appear, may be evaluated. These tests are not about the merits or demerits of the content of any proposed duty of care as such, although of course how the scope and substance of any duty of care is defined will be central to the core rule of law questions of certainty and precision.
These tests are in the nature of a precondition: is the duty of care framed with sufficient certainty and precision to be acceptable as law, particularly bearing in mind potential consequences for individual speech?
It is, for instance, possible for scope to be both broad and clear. That would pass the rule of law test, but might still be objectionable on its merits. But if the scope does not surmount the rule of law threshold of certainty and precision it ought to fall at that first hurdle.
My proposed tests are whether there is sufficient certainty and precision as to:
1. Which operators are and are not subject to the duty of care.
2. To whom the duty of care is owed.
3. What kinds of effect on a recipient will and will not be regarded as harmful.
4. What speech or conduct by a user will and will not be taken to cause such harm.
5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient.
6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform.
7. What circumstances would trigger an operator's duty to take preventive or mitigating steps.
8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm).
9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question.
10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage) would negate the duty of care.
These tests are framed in terms of harms to individuals. Some may object that ‘harm’ should be viewed collectively. From a rule of law perspective it should hardly need saying that constructs such as (for example) harm to society or harm to culture are hopelessly vague.
One likely riposte to objections of vagueness is that a regulator will be empowered to decide on the detailed rules. Indeed it will no doubt be argued that flexibility on the part of a regulator, given a set of high level principles to work with, is beneficial. There are at least two objections to that.
First, the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'être is flexibility, discretionary power and nimbleness. Those are vices, not virtues, where the rule of law is concerned, particularly when freedom of individual speech is at stake.
Second, if the vice of vagueness is potential for arbitrariness, then it is unclear how Parliament delegating policy matters to an independent regulator is any more acceptable than delegating them to a policeman, judge or jury. It compounds, rather than cures, the vice.
Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.