As part of a broader campaign targeting knife
crime, the Home Office has published its consultation response on a new
procedure for authorised police officers to issue takedown notices to online platforms (now
also to include search engines). These would require 48-hour removal of
specified items of illegal weapons content, on pain of civil penalty sanctions.
The government has also tabled implementing amendments to the
Criminal Law and Policing Bill. These merit close attention. A takedown regime
of this kind inevitably faces some similar issues to those that confronted the Online
Safety Act, particularly in how to go about distinguishing illegal from legal
content online. The Online Safety Act eventually included some fairly tortuous provisions that attempt (whether successfully or not) to meet those challenges.
In contrast, the Policing Bill amendments maintain a judicious silence on some
of the thorniest issues.
Parenthetically, as a policy matter the idea of a system for
giving authoritative illegal content removal notices to platforms is not necessarily
a bad one — so long as the decision to issue a notice is independent and accompanied
by robust prior due process safeguards. Previously,
back in 2019, I suggested a system of specialist independent tribunals that could
be empowered to issue such notices to platforms, as (along with other measures)
a preferable alternative to a ‘regulation by discretionary regulator’ scheme. That idea went nowhere.
But back to the Bill amendments. The most critical aspects
of an official content removal notice regime are how illegality is to be
determined, independence of the notice-giver, prior due process and safeguards.
How do the government’s proposals measure up?
What is unlawful weapons content?
As the Online Safety Act has reminded us, the notion of
illegal content is not as simple a concept as might be thought; nor is making
determinations of illegality.
First off, there is the conceptual problem. Online content as
such cannot be illegal: persons, not content, commit offences. It is only
what someone does with, or by means of, content that can be illegal.
Of course, in everyday parlance we say that zombie knives
are illegal, or that extreme pornography is illegal, and we know what we mean. Statutory
drafting has to be more rigorous: it has to reflect the fact that the offence is
constituted by what is done with the item or the content, with what intent, and
subject to any available defences. It is legally incoherent to say that content
constitutes an offence, without seeking to bridge that gap.
The Online Safety Act attempted to grapple with the conceptual
difficulty of equating content with an offence. The Policing Bill amendments do
not.
For England and Wales, new clause NC79 in the Bill amendments asserts that content
is “unlawful weapons content” if it is:
“content that constitutes… an offence under section 1(1) of the
Restriction of Offensive Weapons Act 1959 (offering to sell, hire, loan or give
away etc a dangerous weapon)”
NC79 provides the same for offences under section 1 or 2 of
the Knives Act 1997 (marketing of knives as suitable for combat etc and related
publications), and under section 141(1) of the Criminal Justice Act 1988
(offering to sell, hire, loan or give away etc an offensive weapon).
That is all. The Online Safety Act (Section 59(2)) does kick
off in a similar way, by stipulating that:
“ “Illegal content” means content
that amounts to a relevant offence.”
But (unlike the Policing Bill amendments) section 59(3) goes
on to try to bridge the gap between content and conduct:
“Content consisting of certain
words, images, speech or sounds amounts to a relevant offence if—
(a) the use of the words, images,
speech or sounds amounts to a relevant offence,
(b) the possession, viewing or
accessing of the content constitutes a relevant offence, or
(c) the publication or
dissemination of the content constitutes a relevant offence.”
The Bill amendments contain no equivalent clause.
Determining illegality
Even if the conceptual gap were to be bridged by a similar
amendment clause, that does not mean that illegality is necessarily obvious
just by looking at the online content. Each offence has its own conduct
elements, mental element and any defences that the legislation may stipulate.
Ofcom’s Illegal Content Judgements Guidance under the Online Safety Act devotes
three pages to section 1(1) of the Restriction of Offensive Weapons Act 1959
alone.
Two issues arise with determining illegality: what
information does the authorised police officer need in order to make a determination,
and how sure does the officer have to be that an offence has been committed?
The Online Safety Act, recognising that illegality may have
to be considered in a broader context than the online content alone, stipulates
that a service provider’s determination of illegality has to be made in the
light of all relevant information that is reasonably available to the service
provider.
That has some parallels with the duties of investigating
police officers under the Criminal Procedure and Investigations Act 1996: that
all reasonable steps are taken for the purposes of the investigation and, in
particular, that all reasonable lines of inquiry are pursued.
The 1996 Act duty applies to a police investigation conducted
with a view to ascertaining whether a person should be charged with an offence,
or whether a person charged with an offence is guilty of it. However, ascertaining
whether an offence has been committed for the purpose of a content removal
notice is not the same as doing so with a view to making a charging decision.
In order to issue a content removal notice the officer would not need to identify
who had committed the offence – only determine that someone had done so.
Assuming, therefore, that the 1996 Act duty would not apply
if a police officer were considering only whether to issue a content removal
notice, how far would the police have to go in gathering relevant information
before deciding whether an offence had been committed?
There will of course be cases, perhaps even most cases, in
which the illegality may be obvious – for instance from the kind of knife
involved and what has been said online – and the possibility of a defence
remote. But it will not always be simple, or even possible, to make
an illegality determination by looking at the online content alone.
The Online Safety Act (and Ofcom’s guidance on making
illegality judgements) attempts to indicate what information the service
provider should consider in making judgements about illegality. The Bill
amendments are silent on this.
Indeed, the Ofcom Online Safety Act guidance (which regards
law enforcement as a potential ‘trusted flagger’ for this kind of offence)
anticipates that the flagger may provide contextual information: “Reasonably
available information for providers of user-to-user and search services” is:
• The content suspected to be
illegal content.
• Supporting information provided
by any complainant, including that which is provided by any person the provider
considers to be a trusted flagger.
The silence of the Bill amendments on this topic is all the
more eloquent when we consider that nowhere in the procedures – from content
removal notice through to appeal against a civil penalty notice – is there any
provision for the person whose content is to be removed to be notified or given
the opportunity to make representations.
Comparison with the Online Safety Act
The government emphasises, in its Consultation Response para
6.7, that:
“The proposed measure sits
alongside, and does not conflict with, the structures established through the
Online Safety Act 2023.”
Strictly speaking that is right: a notice from a police
officer under the Bill amendments could have three separate functions or
effects:
- Constitute a notice requiring 48-hour takedown under the new provisions.
- Fix the service provider with awareness of illegality for the purpose of the OSA reactive duty under S.10(3)(b).
- Fix the service provider with knowledge of illegality for the purpose of the hosting liability shield derived from the eCommerce Directive.
Since these are three separate, parallel structures, it is
correct that they do not conflict[1].
Nevertheless, they are significantly different from each other. As well as the
differences from the Online Safety Act already outlined, the role of law
enforcement under the Bill amendments is significantly different.
In particular, although under the Online Safety Act law
enforcement may be considered to be a trusted flagger, Ofcom cautions that:
“A provider is not required to
accept the opinions of a third party as to whether content is illegal content.
Only a judgment of a UK court is binding on it in making this determination. In
all other cases, it will need to take its own view on the evidence, information
and any opinions provided.”
Therein lies the biggest difference between the Online
Safety Act and the Bill amendments. Under the Bill amendments, subject to the
review procedure outlined below, a service provider is required to act
on the opinion of the police.
The government plans that the content removal system will be
operated by a new policing unit, which will be responsible for issuing removal
notices. That is presumably reflected (in part) by the Bill amendment provision
that a content removal notice has to be given by an officer authorised by the
Director General of the National Crime Agency or the chief officer of the
relevant police force.
How sure must the officer be that an offence has been committed?
A related aspect of determining illegality is how sure the
person making the decision has to be that the content is illegal. The Online Safety Act stipulates that the
provider has to treat the content as illegal if it has ‘reasonable grounds to
infer’ that the content is illegal. ‘Reasonable grounds to infer’ is a
relatively low threshold, which has given rise to concerns that legitimate content will
inevitably be removed with consequent risk of European Convention on Human Rights incompatibility.
The Bill amendments take a different approach: the police
officer making the decision must be ‘satisfied’ that the content is unlawful
weapons content. ‘Satisfied’ presumably is not intended to be a wholly
subjective assessment. But if not, what degree of confidence is implicit in
‘satisfied’? If the police officer has residual doubts, or has insufficient
information to make up his or her mind, could the officer be ‘satisfied’ that
the content amounts to an offence? Equally, it probably does not mean
‘satisfied beyond all reasonable doubt’.
The Online Safety Act provides that a service provider does
not have to take into account the possibility of a defence unless it has
reasonable grounds to infer that a defence may be successfully relied upon. By
contrast, under the Bill amendments it seems likely that the police officer
would always have to be satisfied that no defence was available.
Safeguards
The government has sought to address the risk of ill-founded
notices by means of a review mechanism. The content removal notice has to explain the
police officer’s reasons for considering that the content is unlawful weapons
content. The service provider can
request review of a notice by a more senior officer. The reviewing officer must
then give a decision notice, setting out the outcome of the review and giving
reasons. The government has said that it:
“…believes that the review
process designed within the proposal adequately addresses online companies
concerns with cases where it would be difficult to determine the illegality of
content.” (Consultation Response, [6.8])
The review process, however, sheds
no light on how much contextual information gathering by police officers is
contemplated, nor on the degree of confidence implicit in being ‘satisfied’. It
contains no element of independent third party review, nor any opportunity for
the person whose content is to be removed to make representations.
That said, the procedure could perhaps be fleshed out by
guidance to law enforcement that the Secretary of State may (but is not
required to) issue under NC84.
Underlying all these considerations is the matter of ECHR
compatibility. The lower or more subjective the threshold for issuing a notice,
the less the predictability of the process or outcome, and the fewer or weaker the
safeguards against arbitrary or erroneous decision-making, then the greater the
likelihood of ECHR incompatibility.
It might be said against all of this that of course the
police would only issue a content removal notice if it were obvious from the online
content itself that an offence was being committed. If that were the intention, might
it be preferable to make that explicit and write a “manifest illegality”
standard into the legislation?
Does it matter?
It could well be questioned why any of this matters. Who really cares if some content is wrongly taken down, so long as a few fewer knives appear online? That kind of argument is depressingly easy to make where impingements on freedom of expression are concerned. Thus, in a different context, what does it really matter if, in our quest to root out the evils in society, we sacrifice due process and foreseeability to flexibility and remove a few too many tasteless jokes, insulting tweets, offensive posts, shocking comments, wounding parodies, disrespectful jibes about religion or anything else that thrives in the toxic online hinterland of the nearly illegal?
Opinions on that will differ. For me, it matters because the
rule of law matters. Due process provides the opportunity to be
heard. It matters that you should be able to predict in advance, with
reasonable certainty, whether something that you are contemplating posting
online is liable to be taken down as the result of official action (or, for
that matter, the action of a platform seeking to comply with a legal or
regulatory duty).
If you cannot do that, you are at the mercy of arbitrary
exercise of state power. It is knives today, but who knows what tomorrow (we can, however, be
sure that once one 48-hour takedown regime is enacted others will follow). Abandon the rule of law to ad hoc power and,
as Robert Bolt had Sir Thomas More declaim to William Roper in A Man For All
Seasons:
“…do you really think you could
stand upright in the winds that would blow then? Yes, I'd give the Devil
benefit of law, for my own safety's sake!”.
However skilled a dedicated police unit may be, expertise is
no substitute for due process, safeguards and independent adjudication.
Otherwise, why would we bother with courts at all? The fact that content,
rather than a person, is condemned is not, I would suggest, a good reason to skimp on rule of law principles.
It may be said that the Bill amendments provide for recourse
to the courts. They do, but only once matters have got as far as a civil
penalty notice imposing a fine for non-compliance; and they concern only the
platform, not the person who posted the content. That is not the same as due
process, safeguards or independent review at the outset of the decision-making process.
Extraterritoriality
To finish with a more technical matter: extraterritoriality.
The Online Safety Act, although fairly aggressive in its assertion of
jurisdiction, did recognise the need to establish some connection with the
UK in order for a U2U or search service to fall within its territorial scope. Thus
Section 4 of the OSA sets out a series of criteria to determine whether a
service is UK-linked.
The Bill amendments contain no such provision. On the face
of it, a police officer could serve notices under the Act by email and (in the event
of non-compliance) impose civil penalties on any service provider anywhere in
the world, regardless of whether it has any connection with the UK at all.
If that is what is intended, it would be an extraordinary piece of
jurisdictional overreach.
That would also (presumably) bring into play delicate
judgements by authorised police officers, when considering whether to serve a content
removal notice, as to whether an activity on a platform that had no connection
with the UK amounted to an offence within the UK. That is a matter of the
territorial scope of the underlying UK offence. The Online Safety Act circumvents
questions of that kind by, for the purpose of service provider duties, instructing
the service provider to disregard territorial considerations:
“For the purposes of determining
whether content amounts to an offence, no account is to be taken of whether or
not anything done in relation to the content takes place in any part of the
United Kingdom.”
The Bill amendments are silent on these difficult jurisdictional issues.
[1] This is on the basis that the notice regime would fall within the eCommerce Directive exception for specific court or administrative authority orders to terminate an infringement. That would depend on whether the police are properly regarded as an administrative authority. If not, it could be argued that the Policing Bill amendments in substance are inconsistent with the eCommerce Directive hosting liability shield to which, as a matter of policy, the government ostensibly continues to adhere: "The government is committed to upholding the liability protections now that the transition period has ended." (The eCommerce Directive and the UK, last updated 18 January 2021).