The
European Commission recently published a Communication on Tackling Illegal Content Online. Its subtitle, Towards an enhanced responsibility of online platforms, summarises the theme: persuading online intermediaries, specifically social media
platforms, to take on the job of policing illegal content posted by their users. [See also the Commission's follow-up Recommendation on Measures to Effectively Tackle Illegal Content Online, published 1 March 2018.]
The Commission wants the platforms to perform eight main functions (my selection and emphasis):
- Online platforms should be able to take swift decisions on action about illegal content without a court order or administrative decision, especially where notified by a law enforcement authority. (Communication, para 3.1)
- Platforms should prioritise removal in response to notices received from law enforcement bodies and other public or private sector 'trusted flaggers'. (Communication, para 4.1)
- Fully automated removal should be applied where the circumstances leave little doubt about the illegality of the material (such as where the removal is notified by law enforcement authorities). (Communication, para 4.1)
- In a limited number of cases platforms may remove content notified by trusted flaggers without verifying legality themselves. (Communication, para 3.2.1)
- Platforms should not limit themselves to reacting to notices but adopt effective proactive measures to detect and remove illegal content. (Communication, para 3.3.1)
- Platforms should take measures (such as account suspension or termination) which dissuade users from repeatedly uploading illegal content of the same nature. (Communication, para 5.1)
- Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. (Communication, para 5.2)
- Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. (Communication, para 4.1)
It can be seen that the Communication does not stop with policing content. The Commission wants the
platforms to act as detective, informant, arresting officer, prosecution, defence,
judge, jury and prison warder: everything from sniffing out content and deciding
whether it is illegal to locking the impugned material away from public view
and making sure the cell door stays shut. When platforms aren’t doing the
detective work themselves they are expected to remove users’ posts in response to notices from a corps of ‘trusted flaggers’, sometimes without reviewing the alleged illegality themselves. None of this with a real judge or jury in sight.
Contrast the EU’s own Human Rights Guidelines on Freedom of Expression Online and Offline (the Guidelines):
“The EU will … c) Raise awareness
among judges, law enforcement officials, staff of human rights commissions and
policymakers around the world of the need to promote international standards, including standards protecting
intermediaries from the obligation of blocking Internet content without prior
due process.” (emphasis added)
A leaked earlier draft of the Communication referenced the Guidelines. The reference was removed from the final version. It would certainly have been embarrassing for the
Communication to refer to that document. Far from “protecting intermediaries
from the obligation of blocking Internet content without prior due process”,
the premise of the Communication is that intermediaries should remove
and filter content without prior due process.
The Commission has embraced the theory that platforms ought to act as gatekeepers
rather than gateways, filtering the content that their users upload and read.
Article 15, where are you?
Not only is the Communication's approach inconsistent with the EU’s Freedom of
Expression Guidelines, it challenges a longstanding and deeply embedded piece
of EU law. Article 15 of the ECommerce Directive has been on the statute book
for nearly 20 years. It prohibits Member States from imposing general
monitoring obligations on online intermediaries. Yet the Communication says
that online platforms should “adopt effective proactive measures to detect and
remove illegal content online and not only limit themselves to reacting to
notices which they receive”. It
“strongly encourages” online platforms to step up cooperation and investment
in, and use of, automatic detection technologies.
A similar controversy around conflict with Article 15 has been stirred up by Article 13 of the Commission’s proposed Digital Single Market Copyright Directive, also in the context of filtering.
Although the measures that the Communication urges on
platforms are voluntary (thus avoiding a direct clash with Article 15) that is
more a matter of form than of substance. The velvet glove openly brandishes a knuckleduster:
the explicit threat of legislation if the platforms do not co-operate.
“The Commission will continue
exchanges and dialogues with online platforms and other relevant stakeholders.
It will monitor progress and assess whether additional measures are needed, in
order to ensure the swift and proactive detection and removal of illegal
content online, including possible legislative measures to complement the
existing regulatory framework. This work will be completed by May 2018.”
The platforms have been set their task and the big stick
will be wielded if they don't fall into line. It is reminiscent of the UK government’s
version of co-regulation:
“Government defines the public policy objectives that
need to be secured, but tasks industry to design and operate self-regulatory
solutions and stands behind industry ready to take statutory action if
necessary.” (e-commerce@its.best.uk,
Cabinet Office 1999.)
So long as those
harnessed to the task of implementing policy don’t kick over the traces, the uncertain business of persuading a democratically elected legislature to enact a law is
avoided.
The Communication displays no enthusiasm for Article 15. It devotes nearly two
pages of close legal analysis to explaining why, in its view, adopting
proactive measures would not deprive a platform of hosting protection under
Article 14 of the Directive. Article 15,
in contrast, is mentioned in passing but not discussed.
Such tepidity is the more noteworthy when the balance between
fundamental rights reflected in Article 15 finds support in, for instance, the
recent European Court of Human Rights judgment in Tamiz v Google ([84] and [85]). Article 15 is not something that can or should be lightly ignored or manoeuvred around.
For all its considerable superstructure - trusted flaggers, certificated standards, reversibility safeguards, transparency and the rest - the Communication lacks solid foundations. It has the air of a castle built on a chain of quicksands: presumed illegality, lack of prior due process at source, reversal of the presumption against prior restraint, assumptions that illegality is capable of precise computation, failure to grapple with differences in Member States' laws, and others.
Whatever may be the appropriate response to illegal content on the internet – and no one should pretend that this is an easy issue – it is hard to avoid the conclusion that the Communication is not it.
The Communication has already come in for criticism (here, here, here, here and here). At risk of repeating points already well made, this post will take an issue by issue dive into its foundational weaknesses.
To aid this analysis I have listed 30 identifiable prescriptive elements of the Communication. They are annexed as the Communication's Action Points (my label). Citations in the post are to that list and to paragraph numbers in the Communication.
Presumed illegal
Underlying much of the Communication’s approach to tackling
illegal content is a presumption that, once accused, content is illegal until
proven innocent. That can be found in:
- its suggestion that content should be removed automatically
on the say-so of certain trusted flaggers (Action Points 8 and 15, paras 3.2.1
and 4.1);
- its emphasis on automated detection technologies
(Action Point 14, para 3.3.2);
- its greater reliance on corrective safeguards
after removal than preventative safeguards before (paras 4.1, 4.3);
- the suggestion that platforms’ performance
should be judged by removal rates (the higher the better) (see More is better, faster is best below);
- the suggestion that in difficult cases the
platform should seek third party advice instead of giving the benefit of the
doubt to the content (Action Point 17, para 4.1);
- its encouragement of quasi-official databases of
‘known’ illegal content, but without a legally competent determination
of illegality (Action Point 29, para 5.2).
Taken together these add up to a presumption of illegality, implemented by prior restraint.
In one well known case a tweet provoked a
criminal prosecution, resulting in a magistrates’ court conviction. Two years
later the author was acquitted on appeal. Under the trusted flagger system the
police could notify such a tweet to the platform at the outset with every expectation
that it would be removed, perhaps automatically (Action Points 8 and 15, paras
3.2.1, 4.1). A tweet ultimately found to be legal would most probably have been removed from public view,
without any judicial order.
Multiply that thousands of times, factor in that speedy removal
will often be the end of the matter if no prosecution takes place, and we have
prior permanent restraint on a grand scale.
Against that criticism it could be said that the proposed
counter notice arrangements provide the opportunity to reverse errors and
doubtful decisions. However by that time
the damage is done. The default has
shifted to presumed illegality, inertia takes over and many authors will simply
shrug their shoulders and move on for the sake of a quiet life.
If the author does object, the material
would not automatically be reinstated. The Communication suggests that reinstatement
should take place if the counter-notice provides reasonable grounds to consider
that removed content is not illegal. The burden has shifted to the author to
establish innocence.
If an author takes an autonomous decision to remove
something that they have previously posted, that is their prerogative. No
question of interference with freedom of speech arises. But if the suppression results from a
state-fostered system that institutionalises removal by default and requires
the author to justify its reinstatement, there is interference with freedom of
speech of both the author and those who would otherwise have been able to read
the post. It is a classic chilling effect.
Presumed illegality does not feature in any set of freedom
of speech principles, offline or online.
Quite the opposite. The traditional presumption against prior restraint,
forged in the offline world, embodies the principle that accused speech should have
the benefit of the doubt. It should be allowed to stay up until proved illegal.
Even then, the appropriate remedy may only be damages or criminal sanction, not
necessarily removal of the material from public view. Only exceptionally should speech be withheld
from public access pending an independent, fully considered, determination of
legality with all due process.
Even in these days of interim privacy injunctions routinely
outweighing freedom of expression, presumption against prior restraint remains
the underlying principle. The European
Court of Human Rights observed in Spycatcher that “the dangers inherent in prior restraints are such that they
call for the most careful scrutiny on the part of the Court”.
In Mosley v
UK the ECtHR added a gloss that prior restraints may be "more readily justified
in cases which demonstrate no pressing need for immediate publication and in
which there is no obvious contribution to a debate of general public interest".
Nevertheless the starting point remains that the prior restraint requires case by case justification. All the more so for automated
prior restraint on an industrial scale with no independent consideration of the
merits.
A keystone of the Communication is the proposed system of
'trusted flaggers' who offer particular expertise in notifying the presence of
potentially illegal content: “specialised entities with specific expertise in
identifying illegal content, and dedicated structures for detecting and
identifying such content online” (para 3.2.1).
Trusted flaggers “can be expected to bring their expertise
and work with high quality standards, which should result in higher quality
notices and faster take-downs” (para 3.2.1).
Platforms would be expected to fast-track notices from trusted flaggers.
The Commission proposes to explore with industry the potential of standardised
notification procedures.
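The Communication does not say what a standardised notice would contain. Purely by way of illustration (every field name below is hypothetical and drawn from nothing in the Communication), a machine-readable trusted-flagger notice might carry little more than the location of the content, the asserted legal basis and the identity of the flagger:

```python
# Hypothetical sketch only: the Communication proposes no concrete notice schema,
# and every field name below is illustrative rather than taken from any EU standard.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TrustedFlaggerNotice:
    content_url: str                       # where the allegedly illegal item is located
    flagger_id: str                        # identity of the notifying body (e.g. an IRU)
    alleged_basis: str                     # the legal provision said to be infringed
    member_state: str                      # the Member State whose law is relied on
    explanation: str                       # why the flagger considers the item illegal
    sent_at: Optional[datetime] = None     # time of notification
    court_order_ref: Optional[str] = None  # conspicuously optional in the scheme criticised here

def is_substantiated(notice: TrustedFlaggerNotice) -> bool:
    """A platform can check that a notice is complete; completeness says nothing
    about whether the notified content is in fact illegal."""
    return bool(notice.content_url and notice.alleged_basis and notice.explanation)
```

Even a fully populated notice of this kind remains an allegation. The question running through this post is who, if anyone, tests that allegation before the content disappears.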
Trusted flaggers would range from law enforcement bodies to
copyright owners. The Communication names the Europol Internet Referral Unit
for terrorist content and the INHOPE network of reporting hotlines for child
sexual abuse material as trusted flaggers. It also suggests that civil society
organisations or semi-public bodies are specialised in the reporting of illegal
online racist and xenophobic material.
The emphasis is notably on practical expertise rather than
on legal competence to determine illegality. This is especially significant for
the proposed role of law enforcement authorities as trusted flaggers. The
police detect crime, apply to court for arrest or search warrants, execute
them, mostly hand over cases to prosecutors and give evidence in court. They may
have practical competence in combating illegal activities, but they do not have
legal competence to rule on legality or illegality (see below Legal competence v practical competence).
The system under which the Europol IRU sends
takedown notices to platforms is illustrative. Thousands - in the case of the UK's similar Counter Terrorism Internet Referral Unit (CTIRU) hundreds of thousands - of
items of content are taken down on the say-so of the police, with safeguards
against overreaching dependent on the willingness and resources of the platforms
to push back.
It is impossible to know whether such systems are ‘working’
or not, since there is (and is meant to be) no public visibility and evaluation
of what has been removed.
As at July 2017 the CTIRU had removed some 270,000 items since 2010. A recent freedom of information request by the Open Rights Group for a list of the kind of “statistical records, impact assessments and evaluations created and kept by the Counter Terrorism Internet Referrals Unit in relation to their operations” was rejected on grounds that it would compromise law enforcement by undermining the operational effectiveness of the CTIRU and have a negative effect on national security.
In the UK PIPCU (the Police Intellectual Property Crime Unit) is the specialist intellectual property
enforcement unit of the City of London Police. One of PIPCU’s activities is to
write letters to domain registrars asking them to suspend domains used for
infringing activities. Its registrar campaign led to this reminder from a US
arbitrator of the distinction between the police and the courts:
“To permit a registrar of record
to withhold the transfer of a domain based on the suspicion of a law
enforcement agency, without the intervention of a judicial body, opens the
possibility for abuse by agencies far less reputable than the City of London
Police. Presumably, the provision in the Transfer Policy requiring a court
order is based on the reasonable assumption that the intervention of a court
and judicial decree ensures that the restriction on the transfer of a domain
name has some basis of “due process” associated with it.”
A law enforcement body not subject to independent due
process (such as applying for a court order) is at risk of overreach,
whether through over-enthusiasm for the cause of crime prevention, succumbing
to groupthink or some other reason. Due process at source is designed to
prevent that. Safeguards at the receiving end do not perform the same role of keeping
official agencies in check.
The Communication suggests (Action Point 8, 3.2.1) that in
‘a limited number of cases’ platforms could remove content notified by certain trusted flaggers without verifying legality
themselves.
What might these 'limited cases'
be? Could it apply to both state and
private trusted flaggers? Would it apply to any kind of content and any kind of
illegality, or only to some? Would it apply only where automated systems are in
place? Would it apply only where a court has authoritatively determined that
the content is illegal? Would it apply only to repeat violations? The Communication does not tell us. Where it would apply, absence of due process at source takes on greater significance.
Would it perhaps cover the same ground as Action Point 15, under
which fully automated deletion should be applied where "circumstances leave
little doubt about the illegality of the material", for example (according to the Communication) when notified by
law enforcement authorities?
When we join the dots of the various parts of the Communication, the impression is of a significant expectation that, instead of considering the illegality of content notified by law enforcement, platforms will assume that it is illegal and remove it automatically.
The Communication contains little to ensure that trusted
flaggers make good decisions. Most of the safeguards are post-notice and post-removal and consist of
procedures to be implemented by the platforms. As to specific due process obligations on the notice giver, the Communication is silent.
The contrast between this and the Freedom of Expression
Guidelines noted earlier is evident. The Guidelines emphasise prior due process. The Communication
emphasises ex post facto remedial
safeguards to be put in place by the platforms. Those are expected to compensate for absence of due process on the part of the authority giving notice in the first place.
Legal competence v practical competence
The Communication opens its section on notices from state
authorities by referring to courts and competent authorities able to issue
binding orders or administrative decisions requiring online platforms to remove
or block illegal content. Such bodies, it may reasonably be assumed,
would incorporate some element of due process in their decision-making prior to the issue of
a legally binding order.
However we have seen that the Communication abandons that
limitation, referring to ‘law enforcement and other competent authorities’. A
‘competent authority’ is evidently not limited to bodies embodying due process
and legally competent to determine illegality.
It includes bodies such as
the police, who are taken to have practical competence through familiarity with the
subject matter. Thus in the Europol EU Internet
Referral Unit “security experts assess and refer terrorist content to
online platforms”.
It is notable that this section in the earlier
leaked draft did not survive the final edit:
“In the EU, courts and national
competent authorities, including law enforcement authorities, have the competence to establish the
illegality of a given activity or information online.” (emphasis added)
Courts are legally competent to establish legality or
illegality, but law enforcement bodies are not.
In the final version the Commission retreats from the overt
assertion that law enforcement authorities are competent to establish
illegality:
“In the EU, courts and national
competent authorities, including law enforcement authorities, are competent to
prosecute crimes and impose criminal sanctions under due process relating to
the illegality of a given activity or information online.”
However by rolling them up together this passage blurs the distinction between the
police, prosecutors and the courts.
If the police are to be regarded as trusted
flaggers, one can see why the Communication might want to treat them as competent to establish
illegality. But no amount of tinkering
with the wording of the Communication, or adding vague references to due
process, can disguise the fact that the police are not the courts.
Even if we take practical competence of trusted flaggers as
a given, the Communication does not discuss the standard to which trusted
flaggers should evaluate content. Would the threshold be clearly illegal,
potentially illegal, arguably illegal, more likely than not to be illegal, or
something else? The omission is striking when compared with the carefully
crafted proposed standard for reinstatement: "reasonable grounds to
consider that the notified activity or information is not illegal".
The elision of legal competence and practical competence is linked to the lack of insistence on prior due process. In an unclear case a
legally competent body such as a court can make a binding determination one way
or the other that can be relied upon. A
trusted flagger cannot do so, however expert and standards compliant it may
be. The lower the evaluation threshold,
the more the burden is shifted on to the platform to make an assessment which
it is not competent legally, and unlikely to be competent practically, to do. Neither the
trusted flagger nor the platform is a court or a substitute for a
court.
Due process v quality standards
The Commission has an answer to absence of due process at
source. It suggests that trusted flaggers could achieve a kind of
quasi-official qualification:
“In order to ensure a high
quality of notices and faster removal of illegal content, criteria based
notably on respect for fundamental rights and of democratic values could be
agreed by the industry at EU level.
This can be done through
self-regulatory mechanisms or within the EU standardisation framework, under
which a particular entity can be considered a trusted flagger, allowing for
sufficient flexibility to take account of content-specific characteristics and
the role of the trusted flagger.” (para 3.2.1)
The Communication suggests criteria such as internal
training standards, process standards, quality assurance, and legal safeguards
around independence, conflicts of interest, protection of privacy and personal
data. These would have sufficient
flexibility to take account of content-specific characteristics and the role of
the trusted flagger. The Commission intends to explore, in particular in dialogues with the relevant stakeholders, the
potential of agreeing EU-wide criteria for trusted flaggers.
The Communication's references to internal training
standards, process standards and quality assurance could
have been lifted from the manual for a food processing plant. But what we write online is not susceptible of precise measurement of size, temperature, colour and
weight. With the possible exception of illegal images of children (but exactly what
is illegal still varies from one country to another) even the clearest rules
are fuzzy around the edges. For many speech laws, qualified with exceptions and defences, the lack of precision extends further. Some of the most
controversial such as hate speech and terrorist material are inherently vague.
Those are among the most highly emphasised targets of the Commission’s
scheme.
A removal scheme cannot readily be founded on the assumption
that legality of content can always (or even mostly) be determined like grading
frozen peas, simply by inspecting the item - whether the inspector be a human
being or a computer.
No amount of training – of computers or people - can turn a
qualitative evaluation into a precise scientific measurement. Even for a well-established takedown practice
such as copyright, it is unclear how a computer could reliably identify exceptions such as parody or quotation when human beings themselves argue about their scope and application.
The appropriateness of removal based on a mechanical assessment of illegality is also put in question by the recent European Court of Human Rights decision in Tamiz v Google, a defamation case. The court emphasised that a claimant’s Article 8 rights are engaged only where the material reaches a minimum threshold of seriousness. It may be disproportionate to remove trivial comments, even if they are technically unlawful. A removal system that asks only “legal or illegal?” may not be posing all the right questions.
Manifest illegality v contextual information
The broader issue raised in the previous section is that knowledge of content (whether by the notice giver or the receiving platform) is not the same as knowledge of illegality, even if the question posed is the simple "legal or illegal?".
As Eady J. said in Bunt v Tilley in relation to defamation: “In order to be able to characterise something as ‘unlawful’ a person would need to know something of the strength or weakness of available defences”. Caselaw under the ECommerce Directive contains examples of platforms held not to be fixed with knowledge of illegality until a considerable period after the first notification was made. The point applies more widely than defamation.
The Commission is aware that this is an issue:
“In practice, different content
types require a different amount of contextual information to determine the
legality of a given content item. For instance, while it is easier to determine
the illegal nature of child sexual abuse material, the determination of the
illegality of defamatory statements generally requires careful analysis of the
context in which it was made.” (para 4.1)
However, to identify the problem is not to solve it. Unsurprisingly, the Communication struggles to find a consistent approach. At one point it advocates adding humans into the loop of
automated illegality detection and removal by platforms:
“This human-in-the-loop principle is, in
general, an important element of automatic procedures that seek to determine
the illegality of a given content, especially in areas where error rates are
high or where contextualisation is necessary.” (para 3.3.2)
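To make the ‘human-in-the-loop’ principle concrete – and what follows is a hedged sketch of the general technique, not anything the Communication specifies – an automated detection pipeline would typically attach a confidence score to each item and route low-confidence results, or results in context-heavy categories, to a human reviewer rather than to automatic removal:

```python
# Illustrative sketch of "human-in-the-loop" routing. The threshold, the category
# list and the classifier itself are hypothetical, not taken from the Communication.
CONTEXT_DEPENDENT = {"defamation", "hate speech", "terrorist content"}  # context-heavy, high error rates
AUTO_REMOVE_THRESHOLD = 0.99  # arbitrary illustrative value

def route(item, classifier):
    """Decide whether a detected item is removed automatically or referred to a human."""
    category, confidence = classifier(item)  # e.g. ("copyright", 0.97)
    if category in CONTEXT_DEPENDENT or confidence < AUTO_REMOVE_THRESHOLD:
        return "human_review"      # contextualisation needed or error rate too high
    return "automatic_removal"     # the step this post argues amounts to prior restraint
```

Everything in that sketch is a policy choice: the threshold, the list of context-dependent categories, and who reviews the reviewer. The Communication does not say where those lines should be drawn or who audits them.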
It suggests improving automatic re-upload filters,
giving the examples of the existing ‘Database of Hashes’ in respect of
terrorist material, child sexual abuse material and copyright. It says that
could also apply to other material flagged by law enforcement authorities. But
it also acknowledges their limitations:
“However, their effectiveness
depends on further improvements to limit erroneous identification of content
and to facilitate context-aware decisions, as well as the necessary
reversibility safeguards.” (para 5.2)
In spite of that acknowledgment, for repeated infringement the
Communication proposes that: “Automatic stay-down procedures should allow for
context-related exceptions”. (para 5.2)
Placing ‘stay-down’ obligations on platforms is in any event a controversial area, not least because the monitoring required is likely
to conflict with Article 15 of the ECommerce Directive.
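For readers unfamiliar with how such filters operate, the sketch below captures the essence of a hash-based re-upload filter of the kind a ‘Database of Hashes’ implies (real deployments use perceptual fingerprints rather than an exact cryptographic hash, but the structural point is the same): the filter compares a new upload against fingerprints of previously removed items, and by construction sees only the bytes, never the context of the new use.

```python
# Minimal, purely illustrative "stay-down" filter. Real systems use perceptual
# fingerprinting rather than exact SHA-256 matching, but the limitation is the same:
# the filter sees only the content, never the context of the new upload.
import hashlib

known_illegal_hashes = set()  # hypothetical shared "database of hashes"

def register_removed_item(content: bytes) -> None:
    known_illegal_hashes.add(hashlib.sha256(content).hexdigest())

def should_block_reupload(content: bytes) -> bool:
    # An identical file is blocked whether it reappears as propaganda,
    # as quotation in a news report, or as material for academic research.
    return hashlib.sha256(content).hexdigest() in known_illegal_hashes
```

A ‘context-related exception’ therefore cannot emerge from the matching step itself; it has to be bolted on around it, which is where the Communication’s reliance on human intervention and reversibility safeguards comes in.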
Inability to deal adequately with contextuality is no surprise. It raises serious questions of
consistency with the right of freedom of expression. The SABAM/Scarlet and SABAM/Netlog
cases in the CJEU made clear that filtering carrying a risk of wrongful
blocking constitutes an interference with freedom of expression. It is by no
means obvious, taken with the particular fundamental rights concerns raised by
prior restraint, that that can be cured by putting "reversibility
safeguards" in place.
Illegality on the face of the statute v prosecutorial discretion
Some criminal offences are so broadly drawn that, in the UK
at least, prosecutorial discretion becomes a significant factor in mitigating
potential overreach of the offence. In some cases prosecutorial guidelines have
been published (such as the English Director of Public Prosecution’s Social
Media Prosecution Guidelines).
Take the case of terrorism.
The UK government argued before the Supreme Court, in a case about a
conviction for dissemination of terrorist publications by uploading videos to
YouTube (R v Gul [2013] UKSC 64, [30]),
that terrorism was deliberately very widely defined in the statute, but
prosecutorial discretion was vested in the DPP to mitigate the risk of
criminalising activities that should not be prosecuted. The DPP is
independent of the police.
The Supreme Court observed that this amounted to saying that
the legislature had “in effect delegated to an appointee of the executive,
albeit a respected and independent lawyer, the decision whether an activity
should be treated as criminal for the purpose of prosecution”.
It observed that that risked undermining the rule of law
since the DPP, although accountable to Parliament, did not make open,
democratically accountable decisions in the same way as Parliament. Further,
“such a device leaves citizens unclear as to whether or not their actions or
projected actions are liable to be treated by the prosecution authorities as
effectively innocent or criminal - in this case seriously criminal”. It
described the definition of terrorism as “concerningly wide” ([38]).
Undesirable as this type of lawmaking may be, it exists. An institutional removal system founded on the assumption that content can be identified in a binary way as ‘legal/not legal’ takes no account of broad definitions of criminality intended to be mitigated by prosecutorial discretion.
The existing formal takedown procedure under the UK’s
Terrorism Act 2006, empowering a police constable to give a notice to an
internet service provider, could give rise to much the same concern. The Act requires that the notice should
contain a declaration that, in the opinion of the constable giving it, the
material is unlawfully terrorism-related. There is no independent mitigation mechanism
of the kind that applies to prosecutions.
In fact the formal notice-giving procedure under the 2006
Act appears never to have been used, having been supplanted by the voluntary procedure
involving the Counter Terrorism Internet Referral Unit (CTIRU) already described.
The Communication opens its second paragraph with the now familiar
declamation that ‘What is illegal offline is also illegal online’. But if that is the desideratum, are the online
protections for accused speech comparable with those offline?
The presumption against prior restraint applies to
traditional offline publishers who are indubitably responsible for their own
content. Even if one holds the view that platforms ought to be responsible for
users’ content as if it were their own, equivalence with offline does not lead
to a presumption of illegality and an expectation that notified material will
be removed immediately, or even at all.
Filtering and takedown obligations introduce a level of prior
restraint online that does not exist offline. They are exercised against
individual online self-publishers and readers (you and me) via a side door: the platform.
Sometimes this will occur prior to publication, always prior to an independent
judicial determination of illegality (cf Yildirim v Turkey [52]).
The trusted flagger system would institutionalise banned lists maintained
by police authorities and government agencies. Official lists of banned content may sound like tinfoil
hat territory. But as already noted, platforms will be encouraged to operate automatic removal
processes, effectively assuming the illegality of content notified by such
sources (Action Points 8 and 15).
The Commission cites the Europol IRU as a model. The Europol IRU appears not even to restrict itself to notifying
illegal content. In reply to an MEP’s
question earlier this year the EU Home Affairs Commissioner said:
“The Commission does not have
statistics on the proportion of illegal content referred by EU IRU. It should
be noted that the baseline for referrals is its mandate, the EU legal framework
on terrorist offences as well as the terms and conditions set by the companies.
When the EU IRU scans for terrorist material it refers it to the company where
it assesses it to have breached their terms and conditions.”
This blurring of the distinction between notification on
grounds of illegality and notification for breach of platform terms and
conditions is explored further below (Illegality v terms of service).
A recent Policy Exchange paper The New Netwar makes another suggestion. An expert-curated data
feed of jihadist content (it is unclear whether this would go further than
illegal content) that is being shared should be provided to social media
companies. The paper suggests that this would be overseen by the government’s
proposed Commission for Countering Extremism, perhaps in liaison with GCHQ.
It may be said that until the internet came along
individuals did not have the ability to write for the world, certainly not in
the quantity that occurs today; and we need new tools to combat online threats.
If the point is that the internet is in fact different from
offline, then we should carefully examine those differences, consider where
they may lead and evaluate the consequences. Simply reciting the mantra that
what is illegal offline is illegal online does not suffice when removal mechanisms
are proposed that would have been rejected out of hand in the offline world.
Some may look back fondly on a golden age in which no-one
could write for the public other than via the moderating influence of an
editor’s blue pencil and spike. The
internet has broken that mould. Each one of us can speak to the world. That is
profoundly liberating, profoundly radical and, to some no doubt, profoundly
disturbing. If the argument is that more speech demands more ways of
controlling or suppressing speech, that would be better made overtly than behind the
slogan of offline equivalence.
More is better, faster is best
Another theme that underlies the Communication is the focus
on removal rates and the suggestion that more and faster removal is better.
But what is a ‘better’ removal rate? A high removal rate could indicate that trusted flaggers were notifying only content that is definitely unlawful. Or it could mean that platforms were taking
notices largely on trust. Whilst ‘better’ removal rates might reflect better
decisions, pressure to achieve higher and speedier removal rates may equally lead
to worse decisions.
There is a related problem with measuring speed of removal.
From what point do you measure the time taken to remove? The obvious answer is,
from when a notice is received. But although conveniently ascertainable, that
is arbitrary. As discussed above,
knowledge of content is not the same as knowledge of illegality.
Not only is a platform not at risk of liability until it has
knowledge of illegality, but if the overall objective of the Commission's
scheme is removal of illegal content (and only of illegal content) then
the platform positively ought not to take down material unless and until it knows
for sure that the material is illegal. While
the matter remains in doubt the material should stay up.
Any meaningful measure of removal speed should look at the
period elapsed after knowledge was acquired, in addition to or instead of the period from first notification. A compiler
of statistics should look at the history of each removal (or
non-removal) and make an independent evaluation of when (if at all) knowledge
of the illegality was acquired – effectively repeating and second-guessing the
platform’s own evaluation. That exercise becomes more challenging if platforms are taking notices on trust and removing without performing their own evaluation.
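To illustrate the point with an entirely invented example (the dates and the record are hypothetical), compare the two ways of measuring the same removal: from first notification, and from the later moment at which knowledge of illegality could fairly be said to have been acquired:

```python
# Hypothetical illustration of the two measures of removal speed discussed above.
from datetime import datetime

def hours_between(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

# Imaginary removal record: notified on 1 March, but sufficient information to
# found knowledge of illegality supplied only on 8 March, with removal on 9 March.
first_notice   = datetime(2018, 3, 1, 9, 0)
knowledge_date = datetime(2018, 3, 8, 9, 0)   # requires second-guessing the platform's own evaluation
removed_at     = datetime(2018, 3, 9, 9, 0)

print(hours_between(first_notice, removed_at))    # 192 hours: looks slow
print(hours_between(knowledge_date, removed_at))  # 24 hours: looks prompt
```

On the first measure the platform looks dilatory; on the second it looks prompt. Statistics built only on time-from-notice implicitly penalise a platform for taking the time to assess legality at all.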
This is a problem created by the Commission’s desire to
convert a liability shield (the ECommerce Directive) into a removal tool.
Liability shield v removal tool
At the heart of the Communication (and of the whole ‘Notice
and Action’ narrative that the Commission has been promoting since 2010) is an
attempt to reframe the hosting liability shield of the ECommerce Directive as a
positive obligation on the host to take action when it becomes aware of
unlawful content or behaviour, usually when it receives notice.
The ECommerce Directive incentivises, but does not obligate,
a host to take action. This is a nuanced
but important distinction. If the host does not remove the content expeditiously after becoming aware of
illegality then the consequence is that it loses the protection of the shield. But a decision not to remove content after receiving notice does not of itself give rise to any obligation to
take down the content. The host may (or
may not) then become liable for the user’s unlawful activity (if indeed it be
unlawful). That is a question of the reach of each Member State’s underlying law, which may
vary from one kind of liability to another and from one Member State to
another.
This structure goes some way towards respecting due process
and the presumption against prior restraint (see above). Even if the more
realistic scenario is that a host will be over-cautious in its decisions, a
host remains free to decide to leave the material up and bear the risk of liability
after a trial or the risk of an interim court order requiring it to be removed. It has the opportunity to 'publish and be damned'.
The Communication contends that "at EU level the
general legal framework for illegal content removal is the ECommerce
Directive". The Communication finds
support for its ‘removal framework’ view in Recital (40) of the Directive:
“this Directive should constitute
the appropriate basis for the development of rapid and reliable procedures for
removing and disabling access to illegal information”
The Recital goes on to envisage voluntary agreements
encouraged by Member States.
However the Directive is not in substance a removal
framework. It is up to the notice-giver to provide sufficient detail to
fix the platform with knowledge of the illegality. A platform has no obligation to enquire
further in order to make a fully informed Yes/No decision. It can legitimately decline to take action on the basis of insufficient information.
The
Communication represents a subtle but significant shift from intermediary liability
rules as protection from liability to a regime in which platforms are positively expected to act as arbiters, making fully informed decisions on legality and removal (at least when they are not being expected to remove content on trust).
Thus Action Point 17 suggests that platforms could benefit
from submitting cases of doubt to a third party to obtain advice. The Communication
suggests that this could apply especially where platforms find difficulty in
assessing the legality of a particular content item and it concerns a
potentially contentious decision. That, one might think, is an archetypal
situation in which the platform can (and, according to the objectives of the
Communication, should) keep the content up, protected by the fact that
illegality is not apparent.
The Commission goes on to say that ‘Self-regulatory bodies
or competent authorities play this role in different Member States’ and that
‘As part of the reinforced co-operation between online platforms and competent
authorities, such co-operation is strongly encouraged.’ This is difficult to understand, since the
only competent authorities referred to in the Communication are trusted
flaggers who give notice in the first place.
The best that can be said about consistency with the
ECommerce Directive is that the Commission is taking on the role of encouraging
voluntary agreements envisaged for Member States in Recital (40). Nevertheless an
interpretation of Recital (40) that encourages removal without prior due
process could be problematic from the perspective of fundamental rights
and the presumption against prior restraint.
The Communication’s characterisation of the Directive as the general removal framework creates another problem. Member States are free to give
greater protection to hosts under their national legislation than under the
Directive. So, for instance, our Defamation
Act 2013 provides website operators with complete immunity from defamation liability for content posted by identifiable
users. Whilst a website operator that receives
notice and does not remove the content may lose the umbrella protection of the
Directive, it will still be protected from liability under English defamation
law. There is no expectation that a website operator in that position should
take action after receiving notice.
This shield, specific to defamation, was introduced in the
2013 Act because of concerns that hosts were being put in the position of being
forced to choose between defending material as though they had written it
themselves (which they were ill-placed to do) or taking material down
immediately.
In seeking to construct a quasi-voluntary removal framework
in which the expectation is that content will be removed on receipt of sufficient notice,
the Communication ignores the possibility that under a Member State's own
national law the host may not be at risk at all for a particular kind of
liability and there is no expectation that it should remove notified content.
A social media website would be entirely
justified under the Defamation Act in ignoring a notice regarding an allegedly
defamatory post by an identifiable author.
Yet the Communication expects its notice and action procedures to cover
defamation. Examples like this put the Communication on a potential collision course with
Member States' national laws.
This is a subset of a bigger problem with the Communication,
namely its failure to address the issues raised by differences in substantive content laws as
between Member States.
National laws v coherent EU strategy
There is deep tension, which the Communication makes no
attempt to resolve, between the plan to create a coherent EU-wide voluntary
removal process and the significant differences between the substantive content
laws of EU Member States.
In many areas (even in harmonised fields such as copyright) EU
Member States have substantively different laws about what is and isn't illegal.
The Communication seeks to put in place EU-wide removal procedures, but makes
no attempt to address which Member State's law is to be applied, in what
circumstances, or to what extent. This is a fundamental flaw.
If a platform receives (say) a hate speech notification from
an authority in Member State X, by which Member State's law is it supposed to address
the illegality?
If Member State X has especially restrictive laws, then removal
may well deprive EU citizens in other Member States of content that is perfectly
legal in their countries.
If the answer is to put in place geo-blocking, that is
mentioned nowhere in the Communication and would hardly be consistent with the
direction of other Commission initiatives.
If the content originated from a service provider in a
different Member State from Member State X, a takedown request by the
authorities in Member State X could well violate the internal market provisions
of the ECommerce Directive.
None of this is addressed in the Communication, beyond the
bare acknowledgment that legality of content is governed by individual Member
States' laws.
This issue is all the more pertinent since the CJEU has
specifically referred to it in the context of filtering obligations. In
SABAM/Netlog (a copyright case) the
Court said:
"Moreover, that injunction
could potentially undermine freedom of information, since that system might not
distinguish adequately between unlawful content and lawful content, with the
result that its introduction could lead to the blocking of lawful
communications.
Indeed, it is not contested that
the reply to the question whether a transmission is lawful also depends on the
application of statutory exceptions to copyright which vary from one Member
State to another.
In addition, in some Member
States certain works fall within the public domain or may be posted online free
of charge by the authors concerned."
A scheme intended to operate in a coherent way across the
European Union cannot reasonably ignore the impact of differences in Member State
laws. Nor is it apparent how post-removal procedures could assist in resolving
this issue.
The question of differences between Member States' laws also arises in the context of reporting criminal offences.
The Communication states that platforms should report to law
enforcement authorities whenever they are made aware of or encounter evidence
of criminal or other offences (Action Point 18).
According to the Communication this could range from abuse
of services by organised criminal or terrorist groups (citing Europol’s SIRIUS
counter terrorism portal) through to offers and sales of products and
commercial practices that are not compliant with EU legislation.
This requirement creates more issues with differences
between EU Member State laws. Some activities (defamation is an example)
are civil in some countries and criminal in others. Putting aside the question
whether the substantive scope of defamation law is the same across the EU, is a
platform expected to report defamatory content to the police in a Member State
in which defamation is criminal and where the post can be read, when in another
Member State (perhaps the home Member State of the author) it would attract at
most civil liability? The Communication is silent on the question.
Illegality v terms of service
The Commission says (Action Point 21, para 4.2.1) that platform transparency should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service.
The merits of transparency aside, it is unclear why the
Communication has ventured into the separate territory of platform content
policies that do not relate to illegal content. The Communication states
earlier that although there are public interest concerns around content which
is not necessarily illegal but potentially harmful, such as fake news or content
harmful for minors, the focus of the Communication is on the detection and
removal of illegal content.
There has to be a suspicion that platforms’
terms of service would end up taking the place of illegality as the criterion
for takedown, thus avoiding the issues of different laws in different Member
States and, given the potential breadth of terms of service, almost certainly resulting in over-removal compared with removal by reference to illegality. The apparent
practice of the EU IRU in relying on terms of service as well as illegality has
already been noted.
This effect would be magnified if pressure were brought to bear on
platforms to make their terms of service more restrictive. We can see potential
for this in the Policy Exchange publication The
New Netwar:
“Step 1: Ask the companies to
revise and implement more stringent Codes of Conduct/Terms of Service
that explicitly reject extremism.
At present, the different tech
companies require users to abide by ‘codes of conduct’ of varying levels of
stringency. … This is a useful start-point, but it is clear that they need to
go further now in extending the definition of what constitutes unacceptable
content. …
… it is clear that there need to
be revised, more robust terms of service, which set an industry-wide, robust
set of benchmarks. The companies must be pressed to act as a corporate body to
recognise their ‘responsibility’ to prevent extremism as an integral feature of
a new code of conduct. … The major companies must then be proactive in
implementing the new terms of trade. In so doing, they could help effect a
sea-change in behaviour, and help to define industry best practice.”
In this scheme of things platforms’ codes of conduct and
terms of service become tools of government policy rather than a reflection of
each platform’s own culture or protection for the platform in the event of a decision to remove
a user’s content.
Annex - The Communication’s 30 Action Points
State authorities
1. Online platforms should be able to make swift decisions as regards possible actions with respect to illegal content online without being required to do so on the basis of a court order or administrative decision, especially where a law enforcement authority identifies and informs them of allegedly illegal content. [3.1]
2. At the same time, online platforms should put in place adequate safeguards when giving effect to their responsibilities in this regard, in order to guarantee users’ right of effective remedy. [3.1]
3. Online platforms should therefore have the necessary resources to understand the legal frameworks in which they operate. [3.1]
4. They should ensure that they can be rapidly and effectively contacted for requests to remove illegal content expeditiously and also in order to, where appropriate, alert law enforcement to signs of online criminal activity. [3.1]
5. Law enforcement and other competent authorities should co-operate with one another to define effective digital interfaces for fast and reliable submission of notifications and to ensure efficient identification and reporting of illegal content. [3.1]
Notices from trusted flaggers
6. Online platforms are encouraged to make use of existing networks of trusted flaggers. [3.2.1]
7. Criteria for an entity to be considered a trusted flagger based on fundamental rights and democratic values could be agreed by the industry at EU level, through self-regulatory mechanisms or within the EU standardisation framework. [3.2.1]
8. In a limited number of cases platforms may remove notified content without further verifying the legality of the content themselves. For these cases trusted flaggers could be subject to audit and a certification scheme. [3.2.1]
9. Where there are abuses of trusted flagger mechanisms against established standards, the privilege of a trusted flagger status should be removed. [3.2.1]
Notices by ordinary users
10. Online platforms should establish an easily accessible and user-friendly mechanism allowing users to notify hosted content considered to be illegal. [3.2.2]
Quality of notices
11. Online platforms should put in place effective mechanisms to facilitate the submission of notices that are sufficiently precise and adequately substantiated. [3.2.3]
12. Users should not normally be obliged to identify themselves when giving a notice unless it is required to determine the legality of the content. They should be encouraged to use trusted flaggers, where they exist, if they wish to maintain anonymity. Notice providers should have the opportunity voluntarily to submit contact details. [3.2.3]
Proactive measures by platforms
13. Online platforms should adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices. [3.3.1]
14. The Commission strongly encourages online platforms to use voluntary, proactive measures aimed at the detection and removal of illegal content and to step up cooperation and investment in, and use of, automatic detection technologies. [3.3.2]
Removing illegal content
15. Fully automated deletion or suspension of content should be applied where the circumstances leave little doubt about the illegality of the material. [4.1]
16. As a general rule, notices from trusted flaggers should be addressed more quickly. [4.1]
17. Platforms could benefit from submitting cases of doubt to a third party to obtain advice. [4.1]
18. Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. [4.1]
Transparency
19. Online platforms should disclose their detailed content policies in their terms of service and clearly communicate this to their users. [4.2.1]
20. The terms should not only define the policy for removing or disabling content, but also spell out the safeguards that ensure that content-related measures do not lead to over-removal, such as contesting removal decisions, including those triggered by trusted flaggers. [4.2.1]
21. This should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service. [4.2.1]
22. Platforms should publish at least annual transparency reports with information on the number and type of notices received and actions taken; time taken for processing and source of notification; counternotices and responses to them. [4.2.2]
Safeguards against over-removal
23. In general those who provided the content should be given the opportunity to contest the decision via a counternotice, including when content removal has been automated. [4.3.1]
24. If the counter-notice has provided reasonable grounds to consider that the notified activity or information is not illegal, the platform should restore the content that was removed without undue delay or allow re-upload by the user. [4.3.1]
25. However in some circumstances allowing for a counternotice would not be appropriate, in particular where this would interfere with investigative powers of authorities necessary for prevention, detection and prosecution of criminal offences. [4.3.1]
Bad faith notices and counter-notices
26. Abuse of notice and action procedures should be strongly discouraged.
a. For instance by de-prioritising notices from a provider who sends a high rate of invalid notices or receives a high rate of counter-notices. [4.3.2]
b. Or by revoking trusted flagger status according to well-established and transparent criteria. [4.3.2]
Measures against repeat infringers
27. Online platforms should take measures which dissuade users from repeatedly uploading illegal content of the same nature. [5.1]
28. They should aim to effectively disrupt the dissemination of such illegal content. Such measures would include account suspension or termination. [5.1]
29. Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. [5.2]
30. Online platforms should continuously update their tools to ensure all illegal content is captured. Technological development should be carried out in co-operation with online platforms, competent authorities and other stakeholders including civil society. [5.2]
[Updated 19 February 2018 to add reference to Yildirim v Turkey; and 20 June 2018 to add reference to the subsequent Commission Recommendation.]