Wednesday, 25 October 2017

Towards a filtered internet: the European Commission’s automated prior restraint machine

The European Commission recently published a Communication on Tackling Illegal Content Online.  Its subtitle, Towards an enhanced responsibility of online platforms, summarises the theme: persuading online intermediaries, specifically social media platforms, to take on the job of policing illegal content posted by their users. [See also the Commission's follow-up Recommendation on Measures to Effectively Tackle Illegal Content Online, published 1 March 2018.]

The Commission wants the platforms to perform eight main functions (my selection and emphasis):

  1. Online platforms should be able to take swift decisions on action about illegal content without a court order or administrative decision, especially where notified by a law enforcement authority. (Communication, para 3.1)
  2. Platforms should prioritise removal in response to notices received from law enforcement bodies and other public or private sector 'trusted flaggers'. (Communication, para 4.1)
  3. Fully automated removal should be applied where the circumstances leave little doubt about the illegality of the material (such as where the removal is notified by law enforcement authorities). (Communication, para 4.1)
  4. In a limited number of cases platforms may remove content notified by trusted flaggers without verifying legality themselves. (Communication, para 3.2.1)
  5. Platforms should not limit themselves to reacting to notices but adopt effective proactive measures to detect and remove illegal content. (Communication, para 3.3.1)
  6. Platforms should take measures (such as account suspension or termination) which dissuade users from repeatedly uploading illegal content of the same nature. (Communication, para 5.1)
  7. Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. (Communication, para 5.2)
  8. Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. (Communication, para 4.1)

It can be seen that the Communication does not stop with policing content. The Commission wants the platforms to act as detective, informant, arresting officer, prosecution, defence, judge, jury and prison warder: everything from sniffing out content and deciding whether it is illegal to locking the impugned material away from public view and making sure the cell door stays shut. When platforms aren’t doing the detective work themselves they are expected to remove users’ posts in response to a corps of ‘trusted flaggers’, sometimes without reviewing the alleged illegality themselves. None of this with a real judge or jury in sight. 

In May 2014 the EU Council adopted its Human Rights Guidelines on Freedom of Expression Online and Offline. The Guidelines say something about the role of online intermediaries.  Paragraph 34 states:
“The EU will … c) Raise awareness among judges, law enforcement officials, staff of human rights commissions and policymakers around the world of the need to promote international standards, including standards protecting intermediaries from the obligation of blocking Internet content without prior due process.” (emphasis added)
A leaked earlier draft of the Communication referenced the Guidelines. The reference was removed from the final version. It would certainly have been embarrassing for the Communication to refer to that document. Far from “protecting intermediaries from the obligation of blocking Internet content without prior due process”, the premise of the Communication is that intermediaries should remove and filter content without prior due process.  The Commission has embraced the theory that platforms ought to act as gatekeepers rather than gateways, filtering the content that their users upload and read.

Article 15, where are you?

Not only is the Communication's approach inconsistent with the EU’s Freedom of Expression Guidelines, it challenges a longstanding and deeply embedded piece of EU law. Article 15 of the ECommerce Directive has been on the statute book for nearly 20 years. It prohibits Member States from imposing general monitoring obligations on online intermediaries. Yet the Communication says that online platforms should “adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive”.  It “strongly encourages” online platforms to step up cooperation and investment in, and use of, automatic detection technologies.

A similar controversy around conflict with Article 15 has been stirred up by Article 13 of the Commission’s proposed Digital Single Market Copyright Directive, also in the context of filtering.

Although the measures that the Communication urges on platforms are voluntary (thus avoiding a direct clash with Article 15) that is more a matter of form than of substance. The velvet glove openly brandishes a knuckleduster: the explicit threat of legislation if the platforms do not co-operate.
“The Commission will continue exchanges and dialogues with online platforms and other relevant stakeholders. It will monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. This work will be completed by May 2018.”
The platforms have been set their task and the big stick will be wielded if they don't fall into line. It is reminiscent of the UK government’s version of co-regulation: 
“Government defines the public policy objectives that need to be secured, but tasks industry to design and operate self-regulatory solutions and stands behind industry ready to take statutory action if necessary.” (e-commerce@its.best.uk, Cabinet Office 1999.)
So long as those harnessed to the task of implementing policy don’t kick over the traces, the uncertain business of persuading a democratically elected legislature to enact a law is avoided.

The Communication displays no enthusiasm for Article 15.  It devotes nearly two pages of close legal analysis to explaining why, in its view, adopting proactive measures would not deprive a platform of hosting protection under Article 14 of the Directive.  Article 15, in contrast, is mentioned in passing but not discussed.  

Such tepidity is the more noteworthy when the balance between fundamental rights reflected in Article 15 finds support in, for instance, the recent European Court of Human Rights judgment in Tamiz v Google ([84] and [85]). Article 15 is not something that can or should be lightly ignored or manoeuvred around.

For all its considerable superstructure - trusted flaggers, certificated standards, reversibility safeguards, transparency and the rest - the Communication lacks solid foundations. It has the air of a castle built on a chain of quicksands: presumed illegality, lack of prior due process at source, reversal of the presumption against prior restraint, assumptions that illegality is capable of precise computation, failure to grapple with differences in Member States' laws, and others.


Whatever may be the appropriate response to illegal content on the internet – and no one should pretend that this is an easy issue – it is hard to avoid the conclusion that the Communication is not it.

The Communication has already come in for criticism (here, here, here, here and here). At risk of repeating points already well made, this post will take an issue-by-issue dive into its foundational weaknesses.

To aid this analysis I have listed 30 identifiable prescriptive elements of the Communication. They are annexed as the Communication's Action Points (my label). Citations in the post are to that list and to paragraph numbers in the Communication.

Index to Issues and Annex

Presumed illegal

Underlying much of the Communication’s approach to tackling illegal content is a presumption that, once accused, content is illegal until proven innocent. That can be found in:
  • its suggestion that content should be removed automatically on the say-so of certain trusted flaggers (Action Points 8 and 15, paras 3.2.1 and 4.1);
  • its emphasis on automated detection technologies (Action Point 14, para 3.3.2);
  • its greater reliance on corrective safeguards after removal than on preventative safeguards before (paras 4.1, 4.3);
  • the suggestion that platforms’ performance should be judged by removal rates (the higher the better) (see More is better, faster is best below);
  • the suggestion that in difficult cases the platform should seek third party advice instead of giving the benefit of the doubt to the content (Action Point 17, para 4.1);
  • its encouragement of quasi-official databases of ‘known’ illegal content, but without a legally competent determination of illegality (Action Point 29, para 5.2). 

Taken together these add up to a presumption of illegality, implemented by prior restraint.

In one well known case a tweet provoked a criminal prosecution, resulting in a magistrates’ court conviction. Two years later the author was acquitted on appeal. Under the trusted flagger system the police could notify such a tweet to the platform at the outset with every expectation that it would be removed, perhaps automatically (Action Points 8 and 15, paras 3.2.1, 4.1).  A tweet ultimately found to be legal would most probably have been removed from public view, without any judicial order. 

Multiply that thousands of times, factor in that speedy removal will often be the end of the matter if no prosecution takes place, and we have prior permanent restraint on a grand scale.

Against that criticism it could be said that the proposed counter-notice arrangements provide the opportunity to reverse errors and doubtful decisions.  However, by that time the damage is done.  The default has shifted to presumed illegality, inertia takes over and many authors will simply shrug their shoulders and move on for the sake of a quiet life.

If the author does object, the material would not automatically be reinstated. The Communication suggests that reinstatement should take place if the counter-notice provides reasonable grounds to consider that removed content is not illegal. The burden has shifted to the author to establish innocence.

If an author takes an autonomous decision to remove something that they have previously posted, that is their prerogative. No question of interference with freedom of speech arises.  But if the suppression results from a state-fostered system that institutionalises removal by default and requires the author to justify its reinstatement, there is interference with freedom of speech of both the author and those who would otherwise have been able to read the post. It is a classic chilling effect.

Presumed illegality does not feature in any set of freedom of speech principles, offline or online.  Quite the opposite. The traditional presumption against prior restraint, forged in the offline world, embodies the principle that accused speech should have the benefit of the doubt. It should be allowed to stay up until proved illegal. Even then, the appropriate remedy may only be damages or criminal sanction, not necessarily removal of the material from public view.  Only exceptionally should speech be withheld from public access pending an independent, fully considered, determination of legality with all due process.

Even in these days of interim privacy injunctions routinely outweighing freedom of expression, the presumption against prior restraint remains the underlying principle. The European Court of Human Rights observed in Spycatcher that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.

In Mosley v UK the ECtHR added a gloss that prior restraints may be "more readily justified in cases which demonstrate no pressing need for immediate publication and in which there is no obvious contribution to a debate of general public interest". Nevertheless the starting point remains that the prior restraint requires case by case justification. All the more so for automated prior restraint on an industrial scale with no independent consideration of the merits.

Due process at source

A keystone of the Communication is the proposed system of 'trusted flaggers' who offer particular expertise in notifying the presence of potentially illegal content: “specialised entities with specific expertise in identifying illegal content, and dedicated structures for detecting and identifying such content online” (para 3.2.1).

Trusted flaggers “can be expected to bring their expertise and work with high quality standards, which should result in higher quality notices and faster take-downs” (para 3.2.1).  Platforms would be expected to fast-track notices from trusted flaggers. The Commission proposes to explore with industry the potential of standardised notification procedures.

Trusted flaggers would range from law enforcement bodies to copyright owners. The Communication names the Europol Internet Referral Unit for terrorist content and the INHOPE network of reporting hotlines for child sexual abuse material as trusted flaggers. It also suggests civil society organisations or semi-public bodies specialised in the reporting of illegal online racist and xenophobic material.

The emphasis is notably on practical expertise rather than on legal competence to determine illegality. This is especially significant for the proposed role of law enforcement authorities as trusted flaggers. The police detect crime, apply to court for arrest or search warrants, execute them, mostly hand over cases to prosecutors and give evidence in court. They may have practical competence in combating illegal activities, but they do not have legal competence to rule on legality or illegality (see below Legal competence v practical competence).

The system under which the Europol IRU sends takedown notices to platforms is illustrative. Thousands - in the case of the UK's similar Counter Terrorism Internet Referral Unit (CTIRU) hundreds of thousands - of items of content are taken down on the say-so of the police, with safeguards against overreaching dependent on the willingness and resources of the platforms to push back. 

It is impossible to know whether such systems are ‘working’ or not, since there is (and is meant to be) no public visibility and evaluation of what has been removed. 

As at July 2017 the CTIRU had removed some 270,000 items since 2010.  A recent freedom of information request by the Open Rights Group for a list of the kind of “statistical records, impact assessments and evaluations created and kept by the Counter Terrorism Internet Referrals Unit in relation to their operations” was rejected on grounds that it would compromise law enforcement by undermining the operational effectiveness of the CTIRU and have a negative effect on national security.

In the UK PIPCU (the Police Intellectual Property Crime Unit) is the specialist intellectual property enforcement unit of the City of London Police. One of PIPCU’s activities is to write letters to domain registrars asking them to suspend domains used for infringing activities. Its registrar campaign led to this reminder from a US arbitrator of the distinction between the police and the courts:
“To permit a registrar of record to withhold the transfer of a domain based on the suspicion of a law enforcement agency, without the intervention of a judicial body, opens the possibility for abuse by agencies far less reputable than the City of London Police. Presumably, the provision in the Transfer Policy requiring a court order is based on the reasonable assumption that the intervention of a court and judicial decree ensures that the restriction on the transfer of a domain name has some basis of “due process” associated with it.”
A law enforcement body not subject to independent due process (such as applying for a court order) is at risk of overreach, whether through over-enthusiasm for the cause of crime prevention, succumbing to groupthink or some other reason. Due process at source is designed to prevent that. Safeguards at the receiving end do not perform the same role of keeping official agencies in check.

The Communication suggests (Action Point 8, 3.2.1) that in ‘a limited number of cases’ platforms could remove content notified by certain trusted flaggers without verifying legality themselves. 

What might these 'limited cases' be?  Could it apply to both state and private trusted flaggers? Would it apply to any kind of content and any kind of illegality, or only to some? Would it apply only where automated systems are in place? Would it apply only where a court has authoritatively determined that the content is illegal? Would it apply only to repeat violations? The Communication does not tell us. Where it would apply, absence of due process at source takes on greater significance.

Would it perhaps cover the same ground as Action Point 15, under which fully automated deletion should be applied where "circumstances leave little doubt about the illegality of the material", for example (according to the Communication) when notified by law enforcement authorities?

When we join the dots of the various parts of the Communication the impression is of significant expectation that instead of considering the illegality of content notified by law enforcement, platforms may assume that it is illegal and automatically remove it.

The Communication contains little to ensure that trusted flaggers make good decisions. Most of the safeguards are post-notice and post-removal and consist of procedures to be implemented by the platforms. As to specific due process obligations on the notice giver, the Communication is silent.

The contrast between this and the Freedom of Expression Guidelines noted earlier is evident. The Guidelines emphasise prior due process. The Communication emphasises ex post facto remedial safeguards to be put in place by the platforms. Those are expected to compensate for absence of due process on the part of the authority giving notice in the first place.

Legal competence v practical competence

The Communication opens its section on notices from state authorities by referring to courts and competent authorities able to issue binding orders or administrative decisions requiring online platforms to remove or block illegal content. Such bodies, it may reasonably be assumed, would incorporate some element of due process in their decision-making prior to the issue of a legally binding order.

However, we have seen that the Communication abandons that limitation, referring to ‘law enforcement and other competent authorities’. A ‘competent authority’ is evidently not limited to bodies embodying due process and legally competent to determine illegality.

It includes bodies such as the police, who are taken to have practical competence through familiarity with the subject matter. Thus in the Europol EU Internet Referral Unit “security experts assess and refer terrorist content to online platforms”.

It is notable that this passage in the earlier leaked draft did not survive the final edit:
“In the EU, courts and national competent authorities, including law enforcement authorities, have the competence to establish the illegality of a given activity or information online.” (emphasis added)
Courts are legally competent to establish legality or illegality, but law enforcement bodies are not. 

In the final version the Commission retreats from the overt assertion that law enforcement authorities are competent to establish illegality:
“In the EU, courts and national competent authorities, including law enforcement authorities, are competent to prosecute crimes and impose criminal sanctions under due process relating to the illegality of a given activity or information online.”
However, by rolling them up together this passage blurs the distinction between the police, prosecutors and the courts.

If the police are to be regarded as trusted flaggers, one can see why the Communication might want to treat them as competent to establish illegality.  But no amount of tinkering with the wording of the Communication, or adding vague references to due process, can disguise the fact that the police are not the courts.

Even if we take practical competence of trusted flaggers as a given, the Communication does not discuss the standard to which trusted flaggers should evaluate content. Would the threshold be clearly illegal, potentially illegal, arguably illegal, more likely than not to be illegal, or something else? The omission is striking when compared with the carefully crafted proposed standard for reinstatement: "reasonable grounds to consider that the notified activity or information is not illegal".

The elision of legal competence and practical competence is linked to the lack of insistence on prior due process.  In an unclear case a legally competent body such as a court can make a binding determination one way or the other that can be relied upon.  A trusted flagger cannot do so, however expert and standards compliant it may be.  The lower the evaluation threshold, the more the burden is shifted on to the platform to make an assessment which it is not competent legally, and unlikely to be competent practically, to do. Neither the trusted flagger nor the platform is a court or a substitute for a court.

Due process v quality standards

The Commission has an answer to absence of due process at source. It suggests that trusted flaggers could achieve a kind of quasi-official qualification:
“In order to ensure a high quality of notices and faster removal of illegal content, criteria based notably on respect for fundamental rights and of democratic values could be agreed by the industry at EU level.

This can be done through self-regulatory mechanisms or within the EU standardisation framework, under which a particular entity can be considered a trusted flagger, allowing for sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger.” (para 3.2.1)
The Communication suggests criteria such as internal training standards, process standards, quality assurance, and legal safeguards around independence, conflicts of interest, protection of privacy and personal data.  These would have sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger.  The Commission intends to explore, in particular in dialogues with the relevant stakeholders, the potential of agreeing EU-wide criteria for trusted flaggers.

The Communication's references to internal training standards, process standards and quality assurance could have been lifted from the manual for a food processing plant. But what we write online is not susceptible of precise measurement of size, temperature, colour and weight. With the possible exception of illegal images of children (but exactly what is illegal still varies from one country to another) even the clearest rules are fuzzy around the edges. For many speech laws, qualified with exceptions and defences, the lack of precision extends further. Some of the most controversial such as hate speech and terrorist material are inherently vague. Those are among the most highly emphasised targets of the Commission’s scheme.

A removal scheme cannot readily be founded on the assumption that legality of content can always (or even mostly) be determined like grading frozen peas, simply by inspecting the item - whether the inspector be a human being or a computer.

No amount of training - of computers or people - can turn a qualitative evaluation into a precise scientific measurement.  Even for a well-established takedown practice such as copyright, it is unclear how a computer could reliably identify exceptions such as parody or quotation when human beings themselves argue about their scope and application.

The appropriateness of removal based on a mechanical assessment of illegality is also put in question by the recent European Court of Human Rights decision in Tamiz v Google, a defamation case.  The court emphasised that a claimant’s Article 8 rights are engaged only where the material reaches a minimum threshold of seriousness. It may be disproportionate to remove trivial comments, even if they are technically unlawful.  A removal system that asks only “legal or illegal?” may not be posing all the right questions.


Manifest illegality v contextual information

The broader issue raised in the previous section is that knowledge of content (whether by the notice giver or the receiving platform) is not the same as knowledge of illegality, even if the question posed is the simple "legal or illegal?".  

As Eady J. said in Bunt v Tilley in relation to defamation: “In order to be able to characterise something as ‘unlawful’ a person would need to know something of the strength or weakness of available defences”. Caselaw under the ECommerce Directive contains examples of platforms held not to be fixed with knowledge of illegality until a considerable period after the first notification was made. The point applies more widely than defamation.

The Commission is aware that this is an issue:
“In practice, different content types require a different amount of contextual information to determine the legality of a given content item. For instance, while it is easier to determine the illegal nature of child sexual abuse material, the determination of the illegality of defamatory statements generally requires careful analysis of the context in which it was made.” (para 4.1)
However, to identify the problem is not to solve it.  Unsurprisingly, the Communication struggles to find a consistent approach. At one point it advocates adding humans into the loop of automated illegality detection and removal by platforms:
“This human-in-the-loop principle is, in general, an important element of automatic procedures that seek to determine the illegality of a given content, especially in areas where error rates are high or where contextualisation is necessary.” (para 3.3.2)
It suggests improving automatic re-upload filters, giving the examples of the existing ‘Database of Hashes’ in respect of terrorist material, child sexual abuse material and copyright. It says that could also apply to other material flagged by law enforcement authorities. But it also acknowledges their limitations:
“However, their effectiveness depends on further improvements to limit erroneous identification of content and to facilitate context-aware decisions, as well as the necessary reversibility safeguards.” (para 5.2)
In spite of that acknowledgment, for repeated infringement the Communication proposes that: “Automatic stay-down procedures should allow for context-related exceptions”. (para 5.2) 

Placing ‘stay-down’ obligations on platforms is in any event a controversial area, not least because the monitoring required is likely to conflict with Article 15 of the ECommerce Directive.

Inability to deal adequately with contextuality is no surprise. It raises serious questions of consistency with the right of freedom of expression. The SABAM/Scarlet and SABAM/Netlog cases in the CJEU made clear that filtering carrying a risk of wrongful blocking constitutes an interference with freedom of expression. It is by no means obvious, taken with the particular fundamental rights concerns raised by prior restraint, that that can be cured by putting "reversibility safeguards" in place.

Illegality on the face of the statute v prosecutorial discretion

Some criminal offences are so broadly drawn that, in the UK at least, prosecutorial discretion becomes a significant factor in mitigating potential overreach of the offence. In some cases prosecutorial guidelines have been published (such as the English Director of Public Prosecution’s Social Media Prosecution Guidelines). 

Take the case of terrorism.  The UK government argued before the Supreme Court, in a case about a conviction for dissemination of terrorist publications by uploading videos to YouTube (R v Gul [2013] UKSC 64, [30]), that terrorism was deliberately very widely defined in the statute, but prosecutorial discretion was vested in the DPP to mitigate the risk of criminalising activities that should not be prosecuted. The DPP is independent of the police.

The Supreme Court observed that this amounted to saying that the legislature had “in effect delegated to an appointee of the executive, albeit a respected and independent lawyer, the decision whether an activity should be treated as criminal for the purpose of prosecution”.

It observed that that risked undermining the rule of law since the DPP, although accountable to Parliament, did not make open, democratically accountable decisions in the same way as Parliament. Further, “such a device leaves citizens unclear as to whether or not their actions or projected actions are liable to be treated by the prosecution authorities as effectively innocent or criminal - in this case seriously criminal”. It described the definition of terrorism as “concerningly wide” ([38]).

Undesirable as this type of lawmaking may be, it exists. An institutional removal system founded on the assumption that content can be identified in a binary way as ‘legal/not legal’ takes no account of broad definitions of criminality intended to be mitigated by prosecutorial discretion.

The existing formal takedown procedure under the UK’s Terrorism Act 2006, empowering a police constable to give a notice to an internet service provider, could give rise to much the same concern.  The Act requires that the notice should contain a declaration that, in the opinion of the constable giving it, the material is unlawfully terrorism-related.  There is no independent mitigation mechanism of the kind that applies to prosecutions.   

In fact the formal notice-giving procedure under the 2006 Act appears never to have been used, having been supplanted by the voluntary procedure involving the Counter Terrorism Internet Referral Unit (CTIRU) already described.

Offline v online

The Communication opens its second paragraph with the now familiar declamation that ‘What is illegal offline is also illegal online’. But if that is the desideratum, are the online protections for accused speech comparable with those offline? 

The presumption against prior restraint applies to traditional offline publishers who are indubitably responsible for their own content. Even if one holds the view that platforms ought to be responsible for users’ content as if it were their own, equivalence with offline does not lead to a presumption of illegality and an expectation that notified material will be removed immediately, or even at all.  

Filtering and takedown obligations introduce a level of prior restraint online that does not exist offline. They are exercised against individual online self-publishers and readers (you and me) via a side door: the platform. Sometimes this will occur prior to publication, always prior to an independent judicial determination of illegality (cf Yildirim v Turkey [52]).

The trusted flagger system would institutionalise banned lists maintained by police authorities and government agencies. Official lists of banned content may sound like tinfoil hat territory. But as already noted, platforms will be encouraged to operate automatic removal processes, effectively assuming the illegality of content notified by such sources (Action Points 8 and 15).  

The Commission cites the Europol IRU as a model. The Europol IRU appears not even to restrict itself to notifying illegal content.  In reply to an MEP’s question earlier this year the EU Home Affairs Commissioner said:
“The Commission does not have statistics on the proportion of illegal content referred by EU IRU. It should be noted that the baseline for referrals is its mandate, the EU legal framework on terrorist offences as well as the terms and conditions set by the companies. When the EU IRU scans for terrorist material it refers it to the company where it assesses it to have breached their terms and conditions.”
This blurring of the distinction between notification on grounds of illegality and notification for breach of platform terms and conditions is explored further below (Illegality v terms of service). 

A recent Policy Exchange paper, The New Netwar, makes another suggestion: that an expert-curated data feed of jihadist content being shared (it is unclear whether this would extend beyond illegal content) should be provided to social media companies. The paper suggests that this feed would be overseen by the government’s proposed Commission for Countering Extremism, perhaps in liaison with GCHQ.

It may be said that until the internet came along individuals did not have the ability to write for the world, certainly not in the quantity that occurs today; and we need new tools to combat online threats.  

If the point is that the internet is in fact different from offline, then we should carefully examine those differences, consider where they may lead and evaluate the consequences. Simply reciting the mantra that what is illegal offline is illegal online does not suffice when removal mechanisms are proposed that would have been rejected out of hand in the offline world.

Some may look back fondly on a golden age in which no-one could write for the public other than via the moderating influence of an editor’s blue pencil and spike.   The internet has broken that mould. Each one of us can speak to the world. That is profoundly liberating, profoundly radical and, to some no doubt, profoundly disturbing. If the argument is that more speech demands more ways of controlling or suppressing speech, that would be better made overtly than behind the slogan of offline equivalence.

More is better, faster is best

Another theme that underlies the Communication is the focus on removal rates and the suggestion that more and faster removal is better.

But what is a ‘better’ removal rate? A high removal rate could indicate that trusted flaggers were notifying only content that is definitely unlawful.  Or it could mean that platforms were taking notices largely on trust.  Whilst ‘better’ removal rates might reflect better decisions, pressure to achieve higher and speedier removal rates may equally lead to worse decisions. 

There is a related problem with measuring speed of removal. From what point do you measure the time taken to remove? The obvious answer is, from when a notice is received. But although conveniently ascertainable, that is arbitrary.  As discussed above, knowledge of content is not the same as knowledge of illegality.

Not only is a platform not at risk of liability until it has knowledge of illegality, but if the overall objective of the Commission's scheme is removal of illegal content (and only of illegal content) then the platform positively ought not to take down material unless and until it knows for sure that the material is illegal.  While the matter remains in doubt the material should stay up.  

Any meaningful measure of removal speed should look at the period elapsed after knowledge was acquired, in addition to or instead of the period from first notification.  A compiler of statistics should look at the history of each removal (or non-removal) and make an independent evaluation of when (if at all) knowledge of the illegality was acquired – effectively repeating and second-guessing the platform’s own evaluation. That exercise becomes more challenging if platforms are taking notices on trust and removing without performing their own evaluation.
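The difference between the two measurement points is easy to see with a toy example (all timestamps invented for illustration):

```python
from datetime import datetime

# Hypothetical timeline for a single takedown (values invented).
notified = datetime(2017, 10, 1, 9, 0)    # notice received
knowledge = datetime(2017, 10, 3, 14, 0)  # illegality became apparent
removed = datetime(2017, 10, 3, 15, 0)    # content taken down

# Conveniently ascertainable, but arbitrary: clock starts at notification.
from_notice = removed - notified

# More meaningful: clock starts when knowledge of illegality was acquired.
from_knowledge = removed - knowledge

print(from_notice)     # 2 days, 6:00:00
print(from_knowledge)  # 1:00:00
```

On the first measure the platform looks slow; on the second it acted within the hour of acquiring knowledge. A league table built on the first number alone rewards removal on trust, not accurate assessment.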

This is a problem created by the Commission’s desire to convert a liability shield (the ECommerce Directive) into a removal tool.

Liability shield v removal tool

At the heart of the Communication (and of the whole ‘Notice and Action’ narrative that the Commission has been promoting since 2010) is an attempt to reframe the hosting liability shield of the ECommerce Directive as a positive obligation on the host to take action when it becomes aware of unlawful content or behaviour, usually when it receives notice.

The ECommerce Directive incentivises, but does not obligate, a host to take action.  This is a nuanced but important distinction. If the host does not remove the content expeditiously after becoming aware of illegality then the consequence is that it loses the protection of the shield. But a decision not to remove content after receiving notice does not of itself give rise to any obligation to take down the content.  The host may (or may not) then become liable for the user’s unlawful activity (if indeed it be unlawful). That is a question of the reach of each Member State’s underlying law, which may vary from one kind of liability to another and from one Member State to another.

This structure goes some way towards respecting due process and the presumption against prior restraint (see above). Even if the more realistic scenario is that a host will be over-cautious in its decisions, a host remains free to decide to leave the material up and bear the risk of liability after a trial or the risk of an interim court order requiring it to be removed. It has the opportunity to 'publish and be damned'.

The Communication contends that "at EU level the general legal framework for illegal content removal is the ECommerce Directive".  The Communication finds support for its ‘removal framework’ view in Recital (40) of the Directive:
“this Directive should constitute the appropriate basis for the development of rapid and reliable procedures for removing and disabling access to illegal information”
The Recital goes on to envisage voluntary agreements encouraged by Member States.

However the Directive is not in substance a removal framework.  It is up to the notice-giver to provide sufficient detail to fix the platform with knowledge of the illegality.  A platform has no obligation to enquire further in order to make a fully informed Yes/No decision. It can legitimately decline to take action on the basis of insufficient information.  

The Communication represents a subtle but significant shift from intermediary liability rules as protection from liability to a regime in which platforms are positively expected to act as arbiters, making fully informed decisions on legality and removal (at least when they are not being expected to remove content on trust).

Thus Action Point 17 suggests that platforms could benefit from submitting cases of doubt to a third party to obtain advice. The Communication suggests that this could apply especially where platforms find difficulty in assessing the legality of a particular content item and it concerns a potentially contentious decision. That, one might think, is an archetypal situation in which the platform can (and, according to the objectives of the Communication, should) keep the content up, protected by the fact that illegality is not apparent.

The Commission goes on to say that “Self-regulatory bodies or competent authorities play this role in different Member States” and that “As part of the reinforced co-operation between online platforms and competent authorities, such co-operation is strongly encouraged.” This is difficult to understand, since the only competent authorities referred to in the Communication are the trusted flaggers who give notice in the first place. 

The best that can be said about consistency with the ECommerce Directive is that the Commission is taking on the role of encouraging voluntary agreements envisaged for Member States in Recital (40). Nevertheless an interpretation of Recital (40) that encourages removal without prior due process could be problematic from the perspective of fundamental rights and the presumption against prior restraint.

The Communication’s characterisation of the Directive as the general removal framework creates another problem. Member States are free to give greater protection to hosts under their national legislation than under the Directive.  So, for instance, our Defamation Act 2013 provides website operators with complete immunity from defamation liability for content posted by identifiable users.  Whilst a website operator that receives notice and does not remove the content may lose the umbrella protection of the Directive, it will still be protected from liability under English defamation law. There is no expectation that a website operator in that position should take action after receiving notice.

This shield, specific to defamation, was introduced in the 2013 Act because of concerns that hosts were being put in the position of being forced to choose between defending material as though they had written it themselves (which they were ill-placed to do) or taking material down immediately.

In seeking to construct a quasi-voluntary removal framework in which the expectation is that content will be removed on receipt of sufficient notice, the Communication ignores the possibility that under a Member State's own national law the host may not be at risk at all for a particular kind of liability and there is no expectation that it should remove notified content.

A social media website would be entirely justified under the Defamation Act in ignoring a notice regarding an allegedly defamatory post by an identifiable author.  Yet the Communication expects its notice and action procedures to cover defamation. Examples like this put the Communication on a potential collision course with Member States' national laws.

This is a subset of a bigger problem with the Communication, namely its failure to address the issues raised by differences in substantive content laws as between Member States.

National laws v coherent EU strategy

There is deep tension, which the Communication makes no attempt to resolve, between the plan to create a coherent EU-wide voluntary removal process and the significant differences between the substantive content laws of EU Member States.

In many areas (even in harmonised fields such as copyright) EU Member States have substantively different laws about what is and isn't illegal. The Communication seeks to put in place EU-wide removal procedures, but makes no attempt to address which Member State's law is to be applied, in what circumstances, or to what extent. This is a fundamental flaw.

If a platform receives (say) a hate speech notification from an authority in Member State X, by which Member State's law is it supposed to address the illegality?

If Member State X has especially restrictive laws, then removal may well deprive EU citizens in other Member States of content that is perfectly legal in their countries.

If the answer is to put in place geo-blocking, that is mentioned nowhere in the Communication and would hardly be consistent with the direction of other Commission initiatives.

If the content originated from a service provider in a different Member State from Member State X, a takedown request by the authorities in Member State X could well violate the internal market provisions of the ECommerce Directive.

None of this is addressed in the Communication, beyond the bare acknowledgment that legality of content is governed by individual Member States' laws. 

This issue is all the more pertinent since the CJEU has specifically referred to it in the context of filtering obligations. In SABAM/Netlog (a copyright case) the Court said:
"Moreover, that injunction could potentially undermine freedom of information, since that system might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications. 
Indeed, it is not contested that the reply to the question whether a transmission is lawful also depends on the application of statutory exceptions to copyright which vary from one Member State to another.
In addition, in some Member States certain works fall within the public domain or may be posted online free of charge by the authors concerned."
A scheme intended to operate in a coherent way across the European Union cannot reasonably ignore the impact of differences in Member State laws. Nor is it apparent how post-removal procedures could assist in resolving this issue.

The question of differences between Member States' laws also arises in the context of reporting criminal offences. 

The Communication states that platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences (Action Point 18).

According to the Communication this could range from abuse of services by organised criminal or terrorist groups (citing Europol’s SIRIUS counter terrorism portal) through to offers and sales of products and commercial practices that are not compliant with EU legislation.

This requirement creates more issues with differences between EU Member State laws. Some activities (defamation is an example) are civil in some countries and criminal in others. Putting aside the question whether the substantive scope of defamation law is the same across the EU, is a platform expected to report defamatory content to the police in a Member State in which defamation is criminal and where the post can be read, when in another Member State (perhaps the home Member State of the author) it would attract at most civil liability? The Communication is silent on the question.

Illegality v terms of service

The Commission says (Action Point 21) that platform transparency should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service. [4.2.1]

The merits of transparency aside, it is unclear why the Communication has ventured into the separate territory of platform content policies that do not relate to illegal content. The Communication states earlier that although there are public interest concerns around content which is not necessarily illegal but potentially harmful, such as fake news or content harmful for minors, the focus of the Communication is on the detection and removal of illegal content. 

There has to be a suspicion that platforms’ terms of service would end up taking the place of illegality as the criterion for takedown, thus avoiding the issues of different laws in different Member States and, given the potential breadth of terms of service, almost certainly resulting in over-removal compared with removal by reference to illegality. The apparent practice of the EU IRU in relying on terms of service as well as illegality has already been noted.

This effect would be magnified if pressure were brought to bear on platforms to make their terms of service more restrictive. We can see potential for this in the Policy Exchange publication The New Netwar:
“Step 1: Ask the companies to revise and implement more stringent Codes of Conduct/Terms of Service that explicitly reject extremism.

At present, the different tech companies require users to abide by ‘codes of conduct’ of varying levels of stringency. … This is a useful start-point, but it is clear that they need to go further now in extending the definition of what constitutes unacceptable content. … 
… it is clear that there need to be revised, more robust terms of service, which set an industry-wide, robust set of benchmarks. The companies must be pressed to act as a corporate body to recognise their ‘responsibility’ to prevent extremism as an integral feature of a new code of conduct. … The major companies must then be proactive in implementing the new terms of trade. In so doing, they could help effect a sea-change in behaviour, and help to define industry best practice.”

In this scheme of things platforms’ codes of conduct and terms of service become tools of government policy rather than a reflection of each platform’s own culture or protection for the platform in the event of a decision to remove a user’s content.



Annex - The Communication’s 30 Action Points


State authorities

1.       Online platforms should be able to make swift decisions as regards possible actions with respect to illegal content online without being required to do so on the basis of a court order or administrative decision, especially where a law enforcement authority identifies and informs them of allegedly illegal content. [3.1]

2.      At the same time, online platforms should put in place adequate safeguards when giving effect to their responsibilities in this regard, in order to guarantee users’ right of effective remedy. [3.1]

3.      Online platforms should therefore have the necessary resources to understand the legal frameworks in which they operate.  [3.1]

4.      They should ensure that they can be rapidly and effectively contacted for requests to remove illegal content expeditiously and also in order to, where appropriate, alert law enforcement to signs of online criminal activity. [3.1]

5.      Law enforcement and other competent authorities should co-operate with one another to define effective digital interfaces for the fast and reliable submission of notifications and to ensure efficient identification and reporting of illegal content. [3.1]

Notices from trusted flaggers

6.      Online platforms are encouraged to make use of existing networks of trusted flaggers. [3.2.1]

7.       Criteria for an entity to be considered a trusted flagger based on fundamental rights and democratic values could be agreed by the industry at EU level, through self-regulatory mechanisms or within the EU standardisation framework. [3.2.1]

8.      In a limited number of cases platforms may remove notified content without further verifying the legality of the content themselves. For these cases trusted flaggers could be subject to audit and a certification scheme.  [3.2.1] 

9.      Where there are abuses of trusted flagger mechanisms against established standards, the privilege of a trusted flagger status should be removed. [3.2.1]

Notices by ordinary users

10.   Online platforms should establish an easily accessible and user-friendly mechanism allowing users to notify hosted content considered to be illegal. [3.2.2]

Quality of notices

11.     Online platforms should put in place effective mechanisms to facilitate the submission of notices that are sufficiently precise and adequately substantiated. [3.2.3]

12.   Users should not normally be obliged to identify themselves when giving a notice unless it is required to determine the legality of the content. They should be encouraged to use trusted flaggers, where they exist, if they wish to maintain anonymity.   Notice providers should have the opportunity voluntarily to submit contact details. [3.2.3]

Proactive measures by platforms

13.   Online platforms should adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices. [3.3.1]

14.   The Commission strongly encourages online platforms to use voluntary, proactive measures aimed at the detection and removal of illegal content and to step up cooperation and investment in, and use of, automatic detection technologies. [3.3.2]

Removing illegal content

15.   Fully automated deletion or suspension of content should be applied where the circumstances leave little doubt about the illegality of the material. [4.1]

16.   As a general rule, trusted flagger notices should be addressed more quickly. [4.1]

17.   Platforms could benefit from submitting cases of doubt to a third party to obtain advice. [4.1]

18.   Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. [4.1]

Transparency

19.   Online platforms should disclose their detailed content policies in their terms of service and clearly communicate this to their users. [4.2.1]

20.  The terms should not only define the policy for removing or disabling content, but also spell out the safeguards that ensure that content-related measures do not lead to over-removal, such as contesting removal decisions, including those triggered by trusted flaggers. [4.2.1]

21.   This should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service. [4.2.1]

22.  Platforms should publish at least annual transparency reports with information on the number and type of notices received and actions taken; time taken for processing and source of notification; counternotices and responses to them. [4.2.2]

Safeguards against over-removal

23.  In general those who provided the content should be given the opportunity to contest the decision via a counternotice, including when content removal has been automated. [4.3.1]

24.  If the counternotice provides reasonable grounds to consider that the notified activity or information is not illegal, the platform should restore the content that was removed without undue delay or allow re-upload by the user. [4.3.1]

25.  However in some circumstances allowing for a counternotice would not be appropriate, in particular where this would interfere with investigative powers of authorities necessary for prevention, detection and prosecution of criminal offences. [4.3.1]

Bad faith notices and counter-notices

26.  Abuse of notice and action procedures should be strongly discouraged.
a.       For instance by de-prioritising notices from a provider who sends a high rate of invalid notices or receives a high rate of counter-notices. [4.3.2]
b.      Or by revoking trusted flagger status according to well-established and transparent criteria. [4.3.2]

Measures against repeat infringers

27.  Online platforms should take measures which dissuade users from repeatedly uploading illegal content of the same nature. [5.1]

28.  They should aim to effectively disrupt the dissemination of such illegal content. Such measures would include account suspension or termination. [5.1]

29.  Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. [5.2]

30.  Online platforms should continuously update their tools to ensure all illegal content is captured. Technological development should be carried out in co-operation with online platforms, competent authorities and other stakeholders including civil society. [5.2] 

[Updated 19 February 2018 to add reference to Yildirim v Turkey; and 20 June 2018 to add reference to the subsequent Commission Recommendation.]


Monday, 14 August 2017

21 years of cross-border liability on the internet

The Canadian Supreme Court decision in Equustek and the French Conseil d'Etat decision to make a CJEU reference in Google v CNIL have once again focused attention on the intractable issues around cross-border liability for publication on the internet. 

This is a topic on which I have been writing and speaking off and on since 1996, when I first heard lawyers calling for an international convention to govern cross-border internet liability issues. My view then was that, given the strong inclination of each state to assert the superiority of its own laws over anyone else's, anything that 100-plus governments were able to agree on was unlikely to be good for the internet.  We were better off with continuing chaos. (See further my contribution 'Content on the Internet - Law, Regulation, Convergence and Human Rights' in International Law and The Hague's 750th Anniversary (ed. W.P. Heere, TMC Asser Press 1999).) 

Subsequent experience has, if anything, reinforced that view. And what was originally a debate about information published across borders has, not to its benefit, now become tangled up with the separate jurisdictional question of police and intelligence agency powers to obtain private user data from internet companies in other countries (see e.g. Global Commission on Internet Governance Primer on Internet Jurisdiction (PDF)). 

The root of the intractability is simple.  Unlike any previous medium, the internet is cross border by default and is used, not just as reader but as publisher, by individuals in their hundreds of millions. An internet site is, unless its operator takes positive steps to prevent it by geo-blocking, accessible in any country (excepting those that have erected national firewalls at their borders and banned VPNs). This applies not only to commercial websites but also to individual blogs, tweets, Facebook posts and the like. Accessibility also applies to search engines, which since they operate on the internet are themselves by default accessible worldwide.  

How do nation states respond?  They may accept the possibility that their citizens (or residents and visitors) can, if they try hard enough, find information on the internet that has been created under other countries' laws and which may not be lawful at home (just as when citizens travel abroad and read books not available at home). 

If states reject that possibility they end up either forcing their laws on the citizens of other countries by insisting on worldwide removal, or compelling the site or search engine to geo-block. That holds out the prospect of less permeable borders in cyberspace than existed in the pre-internet physical world (putting on one side the ugly precedents of the Berlin Wall and jamming foreign broadcasts). (See further my chapter 'Cyberborders and the Right to Travel in Cyberspace' in The Net and the Nation State (ed. Uta Kohl, CUP 2017).)  

In a submission to the Leveson Inquiry (PDF) in 2012 Max Mosley said: 
“Anyone using the internet must therefore obey the laws in their country.  Similarly, they should obey the law in countries where their posts appear. As a practical matter, it is the search engines and service providers which can best prevent breaches of the law outside the country of origin of the original post."
As the Equustek and CNIL cases illustrate, the focus is indeed now on search engines, with plaintiffs seeking to leverage their gatekeeper potential not just domestically but on a worldwide basis. 

But Mr Mosley's beguiling proposition begs the same questions that have been asked since the 1990s: should the mere fact that a post appears in another country be enough to trigger the law of that country, when the default setting built into the internet is cross-border accessibility? Does it accord with reasonable expectations to require individual bloggers and social media posters to comply with the laws of all countries? Does such a rule strike the right balance from the point of view of readers worldwide, bearing in mind the resulting incentive to apply the most restrictive common denominator? My answer is no to all three questions. (See further my September 2012 submission (PDF) to the Leveson Inquiry commenting on Max Mosley's proposal.)

In one respect we have made progress since 1996. In an increasing number of subject matter areas a targeting test has been held (at least within the EU) to define the territorial scope of a right. Targeting rules hold out the prospect of something approaching a peaceful co-existence regime. Properly formulated and applied, a targeting test (a) lays down that mere accessibility does not trigger the laws or jurisdiction of another country and (b) requires relevant positive conduct, not mere omission, in order to do so. (See further my articles Directing and Targeting: The Answer to the Internet's Jurisdiction Problems (Computer Law Review International, 2004) and Here, There or Everywhere? Cross-border Liability on the Internet (Computer and Telecommunications Law Review, 2007 C.T.L.R. 41).)

However, the furore that periodically erupts around cross-border internet cases shows that there is still little consensus on these issues. Nuanced approaches may be at greatest risk of being jettisoned when the law in question is said to embody a core value of the state asked to adopt an expansive jurisdictional stance. That is also the time when greatest care should be taken not to let enthusiasm for the perceived merits of domestic law override respect for the different laws of other countries and the principle of peaceful co-existence.

Monday, 17 July 2017

Worldwide search de-indexing orders: Google v Equustek

The Supreme Court of Canada has issued its decision in Google Inc v Equustek (28 June 2017). This is the case in which a small Canadian technology company, Equustek, asked the Canadian courts to grant an injunction against the well-known US search engine ordering it to de-index specified websites - not just on its Canadian domain google.ca, but on a worldwide basis. The injunction was an interim order pending trial of Equustek’s action against the operator of the websites. The SCC (by a 7-2 majority) dismissed Google’s appeal and upheld the injunction.

According to taste and point of view the decision is:

(a) a victory for a small Canadian company against a US tech giant

(b) a damaging precedent for future overreaching assertions of extraterritorial jurisdiction by other nation states

(c) a narrowly decided case about interim injunctions with few broad implications

(d) a case that paid insufficient attention to its underlying territorial moorings

(e) a decision that reinforces the role that online intermediaries can and should play in combating unlawful activities

(f) a case heavily influenced by the unattractive behaviour of the operator of the impugned websites, which lays down little in the way of principle to guide future cases

(g) an uncontroversial and well-reasoned illustration of the circumstances in which a court may make orders with extraterritorial effect

(h) another nail in the coffin of worldwide freedom of expression

(i) a pointless exercise in applying national law to the inherently borderless internet.

Commentators critical and supportive (here, here, here, here, here, here and here) have begun to dissect the decision. Some reaction has been less than nuanced. One tweet asked what business a national court had upholding a global injunction, as if no national court had ever issued an injunction with extraterritorial effect before.

Courts have long considered themselves able to grant extraterritorial injunctions. However, out of concern for offending the sensibilities of other territorially sovereign states (comity), they have tended to exercise caution about the circumstances in which they should do so and about the extent of the injunction, and to pay close attention to safeguards designed to minimise any possible international conflict.

The difference between Equustek and previous kinds of world-wide injunction (such as asset-freezing orders) is of course the internet. That distinction cuts both ways. In one direction (emphasised by the SCC) it may be said that a worldwide medium requires a worldwide remedy if it is to be effective. In the other direction the internet, as a cross-border vehicle for speech, amplifies and broadens the extraterritorial impact of injunctions aimed at online activity. From that perspective national courts should be more, not less, cautious about extraterritorial effects on the internet.

Comity urges sensitivity to the concerns of other states, including a state's interest in protecting the rights of its own citizens. The British Columbia Court of Appeal judgment in Equustek adopted this description of comity in the Canadian case of Spencer v The Queen:
“‘Comity’ in the legal sense, is neither a matter of absolute obligation, on the one hand, nor of mere courtesy and good will, upon the other. But it is the recognition which one nation allows within its territory to the legislative, executive or judicial acts of another nation, having due regard both to international duty and convenience and to the rights of its own citizens or other persons who are under the protection of its laws ….” (emphasis added)
However in the context of modern day international human rights we are concerned not only with the sensibilities of other states as proxies for their citizens, but with directly respecting the fundamental rights of internet users in other countries. Typically in internet cases those will be rights to privacy (as in the US Microsoft Warrant case) or freedom of expression (as in Equustek). The fundamental rights of internet users are a separate matter from the sensibilities of a nation state. To focus only on state sensitivities risks overlooking or understating the distinct interests of their citizens. (For more on this topic see my chapter in the recently published book ‘The Net and the Nation State’.)

Cases on extraterritorial injunctions tend to resolve themselves into questions not of whether a court has the power to make an extraterritorial order, but whether it should exercise that power and if so how. That is a question of discretion, involving any applicable principles on which discretion should be exercised, and voluntary jurisdictional self-restraint. When faced with a bad actor, an ugly set of facts and a demand for an effective remedy it is all the more important that a court should anxiously examine the basis for exercising its power and carefully identify and balance competing factors, even – perhaps especially - where the internet is concerned.

Whatever the future significance of the Equustek decision (a rich source of jurisprudence or a barren seed destined for obscurity) the factual background to the case is unusual and provides illuminating context for the way in which the Supreme Court of Canada approached the case. (Caveat: my comments are from the perspective of an English lawyer, with no particular knowledge of Canadian law.)

Background and context


The story starts fairly conventionally when Equustek, a manufacturer of electronic networking products, fell out with its distributor Datalink Technologies Gateways Inc ('Datalink'), which then operated from Vancouver. Equustek claimed that for many years Datalink had been re-labelling one of Equustek's products and passing it off as Datalink's own; that Datalink then acquired confidential information and misused it to design and manufacture a competing product; and that Datalink then passed off the competing product by supplying it in substitution for Equustek products advertised on its websites. Equustek terminated the distribution agreement and in April 2011 started litigation in British Columbia against Datalink and its principal.

Initially Datalink defended the proceedings. However the complexion of the dispute changed in 2012 with Datalink abandoning its defence, skipping the jurisdiction, setting up numerous shell companies, operating multiple websites and breaching various orders made by the Canadian courts.

Whatever may have been the merits (or not) of its original defence, Datalink now exhibited the demeanour of a fugitive from justice. Among the injunctions granted against Datalink during 2012 was an order freezing Datalink's assets worldwide. In September 2012 Equustek applied for Datalink and its principal to be found in contempt of court. The Canadian court issued a warrant for the arrest of the principal.

The defences of two of the defendants were struck out in June 2012 for failure to comply with court orders (and the third in March 2013). As the first instance judge (Fenlon J.) observed, they were therefore presumed to admit the allegations against them. Although Equustek was given permission in June 2012 to apply for final judgment against Datalink, it did not do so. As a consequence the interim orders made by the Canadian court continued in force.

Despite all this Datalink continued, according to the Supreme Court judgment, to carry on business from an unknown location, selling its impugned product on its websites to customers all over the world.

Google entered the picture in September 2012 when Equustek asked it to de-index the Datalink websites. Google refused, following which Equustek applied for an order requiring it to do so.

Google then told Equustek that if it obtained an order against Datalink prohibiting it from carrying on business on the internet, Google would remove specific webpages (but not, in accordance with its internal policy, entire websites).

Shortly afterwards, in December 2012, Equustek (supported by Google) obtained from the Canadian court an injunction against Datalink ordering it to "cease operating or carrying on business through any website". The SCC judgment does not state in terms whether this injunction was itself worldwide. However in the context of Datalink having moved its activities outside Canada, it would be unsurprising if the order were understood to include Datalink websites operated from outside Canada, and not limited to the .ca domain. In any event that is the implication of statements in the judgments that Datalink's activities outside Canada had breached that order.

Parenthetically, a previous judgment of the Canadian Supreme Court (Pro Swing) counsels the need to be explicit about the territorial scope of injunctions when dealing with the internet and territorially defined rights such as trade marks:
"The Internet component does not transform the US trademark protection into a worldwide one. … 
Extraterritoriality and comity cannot serve as a substitute for a lack of worldwide trademark protection. The Internet poses new challenges to trademark holders, but equitable jurisdiction cannot solve all their problems. In the future, when considering cases that are likely to result in proceedings in a foreign jurisdiction, judges will no doubt be alerted to the need to be clear as regards territoriality. Until now, this was not an issue because judgments enforcing trademark rights through injunctive relief were, by nature, not exportable."
Following the December 2012 order against Datalink Google voluntarily removed specific webpages from its .ca search results. Equustek became aware of the limitation to google.ca in May 2013 as the result of cross-examining a Google witness (1st instance judgment [75]).

The order against Datalink requiring it to cease carrying on business on the internet was, as the dissenting judgment in the Supreme Court points out, wider than Equustek's underlying claim against Datalink. That claim was for relief against specific aspects of Datalink's business: using Equustek's trade marks and free-riding on the goodwill of Equustek products on any website; disparaging or in any way referring to Equustek products; distributing certain manuals and displaying images of Equustek's products on any website; and selling a named line of products alleged to have been created by the theft (sic) of Equustek's trade secrets.

But in the application against Google the effective complaint about Datalink moved away from Equustek's underlying infringement claims. The basis of the decision to grant an order against Google was that Datalink, by continuing business on the internet, was breaching the existing wide interim injunction – an order obtained with the support of Google and which, the Supreme Court says [SCC 34], Google had offered to comply with voluntarily. Third parties with notice of an interim injunction can be treated as if bound by it [SCC 29, 33]. The claim for a de-indexing injunction against Google was therefore for an order piggybacked on a pre-existing broad and, it seems, worldwide injunction against Datalink.

The significance of the order requiring Datalink to cease carrying on business on the internet can be seen at all three judicial levels. Fenlon J. at first instance said that the plaintiffs sought the injunction against Google to prevent continued and flagrant breaches of the court's orders in the underlying action [1st inst. 86]. The BC Court of Appeal described the injunction claimed against Google as 'ancillary relief designed to ensure that orders already granted against the defendants are effective' (emphasis added) [BCCA 2].

The Supreme Court emphasised that in the absence of de-indexing the sites Google was facilitating Datalink’s breach of the order “by enabling it to carry on business through the Internet” (emphasis added). [SCC 34] The de-indexing injunction against Google was said to flow from the necessity of Google’s assistance to prevent the facilitation of Datalink’s ‘ability to defy court orders and do irreparable harm to Equustek’ (emphasis added) [SCC 35].

The specifics of Equustek’s underlying complaints against Datalink and (pertinently, in a case where the appropriateness of a worldwide injunction was in issue) the extent to which they may have been based on territorially limited Canadian rights received relatively little attention.

The fact that an underlying claim is territorially limited does not mean that interim ancillary relief must be similarly limited. Otherwise it would not be possible to grant a worldwide asset-freezing injunction in a case based on a territorially limited right. However, as the minority judgment pointed out, a de-indexing injunction differs from an asset freezing injunction (the rationale for which is to maintain the integrity of the court's process [1st instance 132]) in that it enforces a plaintiff's asserted substantive rights [SCC 72].

In principle any consideration of whether to grant a worldwide de-indexing injunction against an intermediary ought therefore to take into account the nature and territorial extent of the claims made and rights asserted against the alleged wrongdoer. All the interests potentially affected by the grant of the injunction can then be identified and weighed. 

It does not necessarily follow that if the SCC had approached these issues differently the outcome would have changed significantly, or indeed at all. This was after all an appeal against an exercise of the court's discretion, which is entitled to a high degree of deference [SCC 22]. And the underlying defendant’s flouting of court orders was always likely to weigh heavily.

Nevertheless, the two minority judges of the SCC considered that the seven-judge majority had not exercised sufficient judicial restraint. They would not have made the order at all. In any event a closer analysis of extraterritoriality might have provided a more detailed foundation on which to consider different factual situations in future cases.

The SCC's reasoning


My focus is on three aspects of the SCC's reasoning:
  • the range of interests engaged by an extraterritorial de-indexing injunction;
  • the territoriality of the underlying claims against Datalink; and
  • the approach to freedom of speech rights.


Interests engaged by a worldwide de-indexing injunction


The SCC discussed worldwide injunctions against offline intermediaries and domestic injunctions against online intermediaries such as ISP site blocking orders. These precedents, however, do not fully address the issues that arise with a global de-indexing injunction against a search engine.

Under existing caselaw ancillary orders may be granted that affect offline intermediaries such as banks. Such an order can cover freezing of a defendant’s assets and disclosure of information identifying bank accounts and their contents. Both elements can be granted on a worldwide basis. A third party such as a bank can be required to assist.

The courts have been at pains both to respect the position of the intermediary as a party not accused of wrongdoing and to minimise the possibility of conflict with foreign law. Thus the standard form English worldwide asset freezing injunction contains several safeguards:

- The Babanaft proviso. The order is only to affect a third party in a foreign country to the extent that the order is declared enforceable by, or is enforced by, a court in that country.

- An undertaking by the applicant not, without the permission of the court, to seek to enforce the order in any country outside England and Wales.

- The Baltic proviso. Nothing in the order, in respect of assets located outside England and Wales, prevents any third party (whether within or outside the jurisdiction) from complying with what it reasonably believes to be its obligation, contractual or otherwise, under the laws and obligations of the country or state in which those assets are situated; and any orders of the courts of that country or state.

These provisions achieve three things. The Babanaft proviso minimises potential comity and conflict of law problems by making the order conditional on enforcement in the foreign court; the undertaking enables the English court to prevent the applicant taking oppressive enforcement action in a foreign country; and the Baltic proviso recognises that even a third party within England and Wales (such as a London bank with a foreign branch where an account is held) ought not to be compelled to do something contrary to foreign law or court order.

The banking system, like the internet, is international. But the courts have recognised, even faced with hard cases such as alleged fraud, the need to pay careful attention to balancing the various state and non-state interests involved.

Asset freezing orders affect fewer interests than does a worldwide de-indexing injunction. A freezing injunction typically affects only the claimant, the defendant and its assets, any third party (such as a bank) that may be holding that defendant’s assets, and potentially the sovereignty of any state in which those assets may be held.

A worldwide de-indexing injunction against a search engine engages a new category: the millions of people around the world who would otherwise be able to seek out the material. This is novel territory, engaging a different kind of interest: freedom of speech at scale.

In Equustek the first instance judge Fenlon J. considered whether a Baltic proviso should be inserted in the order, but decided it was not necessary:
"In the present case, Google is before this Court and does not suggest that an order requiring it to block the defendants' websites would offend California law, or indeed the law of any state or country from which a search could be conducted. Google acknowledges that most countries will likely recognise intellectual property rights and view the selling of pirated products as a legal wrong." [1st inst. 144] 
"Google was named in this application, served with materials, and attended the hearing. It is not therefore necessary to craft terms anticipating possible conflicts Google could face in complying with the interim injunction. No terms of this kind have been requested by Google and I see no basis on the record before me to expect such difficulties." [1st inst. 160]

The BC Court of Appeal added that "in the unlikely event that any jurisdiction finds the order offensive to its core values, an application could be made to court to modify the order so as to avoid the problem." [BCCA 94]

The SCC similarly relied on Google's ability to apply to modify the order:

"If Google has evidence that complying with such an injunction would require it to violate the laws of another jurisdiction, including interfering with freedom of expression, it is always free to apply to the British Columbia courts to vary the interlocutory order accordingly. To date, Google has made no such application." [SCC 46]
And

"In the absence of an evidentiary foundation, and given Google’s right to seek a rectifying order, it hardly seems equitable to deny Equustek the extraterritorial scope it needs to make the remedy effective, or even to put the onus on it to demonstrate, country by country, where such an order is legally permissible. We are dealing with the Internet after all, and the balance of convenience test has to take full account of its inevitable extraterritorial reach when injunctive relief is being sought against an entity like Google." (emphasis added) [SCC 47]
Unlike the Babanaft and Baltic provisos this places the burden on the third party to demonstrate that compliance with the injunction would place it in conflict with another state's laws, rather than crafting the injunction so as to minimise the risk of the third party being placed in that position in the first place. Of course asset freezing orders have to anticipate potential difficulties for third parties in a vacuum, since they are made without the third party present in court. In Equustek Google was, as Fenlon J commented, before the court. Whether that is a good basis on which to shift the burden to the third party is a question that will no doubt be revisited in future cases.

The SCC was sceptical of the possibility of conflicts with other jurisdictions' laws. First it observed that there was no harm to Google because it did not have to take steps around the world, but only where its search engine was controlled. [SCC 43]

It went on:

"Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, 'Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong.'" [SCC 44]
The passage refers to comity. It does not address the new element introduced by a worldwide de-indexing injunction, namely potential interference with freedom of speech rights of individual internet users in other countries. This is different from the question of whether the injunction might require the search engine to violate the law of another state. It is quite possible for a search engine’s compliance with an injunction to inhibit users’ access to a website (and thus engage their freedom of speech rights) in another country without the search engine itself contravening the law of that country.

Nor does the passage distinguish between the abstract concept of a state recognising intellectual property rights and the question of what specific rights Equustek may or may not have possessed outside Canada, to which I now turn.

Territoriality of underlying claims


As to the territoriality of Equustek's underlying claims, one of them was for passing off. A claim for passing off has to be supported by goodwill, which is territorial. A Canadian company doing business in Canada will own goodwill in Canada, and in any other countries in which it does business. The geographic extent of the plaintiff’s business determines the geographic extent of its rights. The plaintiff does not automatically have worldwide rights.

Equustek’s other main claim against Datalink was for breach of confidence. Unlike passing off, breach of confidence is not, at least in English law, in its nature a territorially delimited right. It is an equitable obligation owed personally rather than something in the nature of a property right. On the assumption that most countries recognise breach of confidence as a cause of action, that might provide a stronger foundation for a worldwide remedy than passing off does.

The SCC did not analyse these separate causes of action, no doubt because its emphasis was on breach of the court order preventing Datalink from carrying on business on the internet rather than on Equustek's underlying claims against Datalink.

Instead the SCC concentrated on the harm that Datalink's various internet-based wrongdoings were causing to Equustek on a worldwide basis, with no analysis of the geographical extent of Equustek’s business or rights:
“The problem in this case is occurring online and globally. The Internet has no borders — its natural habitat is global. The only way to ensure that the interlocutory injunction attained its objective was to have it apply where Google operates — globally. As Fenlon J. found, the majority of Datalink’s sales take place outside Canada. If the injunction were restricted to Canada alone or to google.ca, as Google suggests it should have been, the remedy would be deprived of its intended ability to prevent irreparable harm. Purchasers outside Canada could easily continue purchasing from Datalink’s websites, and Canadian purchasers could easily find Datalink’s websites even if those websites were de-indexed on google.ca. Google would still be facilitating Datalink’s breach of the court’s order which had prohibited it from carrying on business on the Internet.” [SCC 41]
This passage justifies the worldwide nature of the injunction on two separate and distinct grounds.

The first is that purchasers outside Canada could continue to purchase from Datalink's websites. The SCC tells us ([16], [41]) that the majority of Datalink’s sales were to purchasers outside Canada. The first instance judge said that Datalink’s sales originated primarily in other countries, “so the court’s process cannot be protected unless the injunction ensures that searchers from any jurisdiction do not find [Datalink’s] websites”. [SCC 19]

Whilst from one perspective this factual context may be seen as reinforcing the need for extraterritorial relief, from another the intention to prevent searchers in countries other than Canada from accessing Datalink’s sites raises the question whether Equustek’s rights were territorially limited and, in particular, whether it had any rights outside Canada.

None of the three judgments (first instance, BC Court of Appeal, Supreme Court) contains an explanation of whether Equustek claimed to have rights worldwide, and if so what rights. (One of the previous judgments in the underlying litigation between Equustek and Datalink refers to a predecessor company of Datalink having at some time previously distributed Equustek's products in Canada and the USA.)

The closest is a comment by Fenlon J. at first instance that the French case of Max Mosley was distinguishable on the basis that publication of the images in that case was a breach of the French penal code but not of the laws of other countries. The implication is that in Equustek there was a breach of other countries' laws. If Equustek did have a worldwide business and rights outside Canada, that is not made clear in the judgments.

There may also be an implication, from the suggestion in the SCC judgment that Equustek was being harmed worldwide, that Equustek had business or underlying rights worldwide. But nowhere is that made explicit.


The second ground on which the passage quoted above justifies the worldwide nature of the injunction is that purchasers inside Canada could easily find Datalink's sites on Google search domains other than .ca. At first instance Fenlon J. also concluded that to be effective even within Canada Google had to block search results on all its websites. [SCC 40]

This ground is implicitly based only on Equustek's domestic rights in Canada. That could raise the question of how far a court should strive to prevent local users' access to a foreign website without first considering whether such a site is targeted at its country. There is a trend in intellectual property to hold that a foreign site does not by virtue of its visibility alone substantively infringe (say) a domestic trade mark. Targeting the jurisdiction is also required.

More generally, an application for an extraterritorial de-indexing injunction based purely on domestic rights is likely to be treated with more circumspection than one underpinned by an express showing of worldwide rights.

Approach to freedom of speech rights


The third aspect of the SCC judgment of particular interest is its approach to freedom of expression.  

The SCC, while acknowledging the significance of freedom of expression, in this case saw it as weakly engaged:

"And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case." [SCC 45]
How did the SCC reach this conclusion?

The SCC effectively sidelined the question of individual freedom of speech rights by equating freedom of speech with comity. It endorsed the reasoning of the BC Court of Appeal, which had said:

"In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation." [BCCA 93]
However this passage frames a fundamental rights question (would this remedy engage the freedom of speech rights of internet users in country X?) as a purely state-centric comity issue (would this remedy offend the sensibilities of State X?).

The SCC appears to address that comity question by asking not whether Equustek had any relevant rights in other countries, but whether at some abstract level another state might be offended by a Canadian court enforcing the plaintiff's intellectual property rights.

This brings us back to the point that intellectual property rights are mostly local, not global. A UK right is separate from a US right and so on. For many IP rights (particularly trade marks and patents) we cannot assume that because someone owns a right in country A they own a corresponding right in country B. Often they will not. Some IP rights (such as copyright) do come closer to being global, since states party to international treaties undertake to grant automatic reciprocal rights to other states’ nationals. Even then the content and scope of the rights will differ from one country to another.

So if, for instance, Equustek had trade mark rights or goodwill only in Canada (which, to repeat, the SCC judgment does not discuss) then the grant of a worldwide de-indexing injunction would inhibit users in other countries from accessing sites which, as a matter of the trade mark law of those countries, they would be entitled to see. That would strongly engage their individual freedom of expression rights. Even viewed purely as a comity question, might that not offend the sensibilities of that state?

The territorial nature of most IP rights also provides perspective for the comment of the SCC that it hardly seemed equitable to put the onus on Equustek to demonstrate, country by country, where an extraterritorial order is legally permissible.

If a plaintiff seeks a worldwide order against a non-wrongdoer third party that engages the fundamental rights of millions of internet users around the world, why (it might be asked) should it not have to make some kind of showing that it has rights in those other countries that could underpin the grant of such relief?

It must be acknowledged (again) that the territoriality of trade marks and passing off is a different matter from territoriality of breach of confidence. One can envisage arguments that breach of confidence, as a personally owed equitable obligation, might require less of a showing of rights in other countries than the inherently territorial passing off.

However the SCC judgment undertakes no analysis of that kind, perhaps because (as already discussed) the real foundation of the de-indexing injunction against Google was not Equustek’s underlying claims against Datalink at all, but the broad and apparently worldwide order against Datalink to cease carrying on business through any website.

Indeed the BC Court of Appeal, commenting on freedom of expression concerns raised not only by Google but also by intervenors the Canadian Civil Liberties Association and the Electronic Frontier Foundation, said:

"The order made in this case is an ancillary order designed to give force to earlier orders prohibiting the defendants from marketing their product. Those orders were made after thorough consideration of the strength of the plaintiffs' and defendants' cases. Google does not suggest that the orders made against the defendants were inappropriate, nor do the intervenors suggest that those orders constituted an inappropriate intrusion on freedom of speech." (emphasis added) [BCCA 109]
Nevertheless, in terms of justifying the impact of the claimed relief on freedom of speech rights of internet users in other countries it might be thought that in such a case the more relevant consideration is not the breadth of a pre-existing order made against the defendant by the domestic court, but whether the plaintiff has underlying rights in those countries capable of justifying the interference.

The SCC went further and said that the de-indexing order did not engage freedom of expression values at all:

"This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders. We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods." [SCC 48]
Whatever the position in Canadian law, as a matter of international human rights law this passage elides the concepts of engagement and balancing of fundamental rights. The de-indexing order inhibits the ability of users to access and read the websites in question. That plainly engages the right of freedom of expression. For copyright that is established by the European Court of Human Rights' Ashby Donald decision.

The question of whether the interests of the plaintiff should outweigh an infringer's (or in this case an internet user's) right of free expression is a matter of necessity, proportionality and balancing the respective rights. Only exceptionally would an infringer's right to freedom of expression outweigh that of the IP rightsholder (although it is possible). But that does not mean that the right of freedom of expression is not engaged. Once engaged, the need to justify the interference leads inexorably to the question of what rights, if any, Equustek may have owned in countries in which users' access to Datalink's sites would be inhibited.

The SCC did go on to say that “Even if it could be said that the injunction engages freedom of expression issues, this is far outweighed by the need to prevent the irreparable harm that would result from Google’s facilitating Datalink’s breach of court orders.” [SCC 49] It is difficult to comment on this since the SCC evaluated the weight of the freedom of expression issues not in terms of the possible effect on individual users, but as a matter of state sensibilities.

In contrast, Fenlon J. at first instance identified the interests of internet users as a relevant factor in assessing the balance of convenience. Again the emphasis was on the pre-existing broad Canadian court order against Datalink rather than on what underlying rights Equustek might have had in other countries:

"To this list of considerations I would add the degree to which interests of those other than the applicant and the identified non-party could be affected – here potential purchasers will not be able to find and buy the defendant's products as easily, but that is as it should be in light of the existing court orders prohibiting the defendants from selling the GW1000 and related products." (emphasis added) [1st inst. 155]

The dissenting judgment


The SCC upheld the de-indexing injunction by a 7-2 majority. The minority identified five factors that in their view told in favour of exercising judicial restraint and declining to make the order. Among their concerns was lack of effectiveness, in that Datalink's websites could be found by other means whether or not Google searches listed them; that in itself counselled restraint in granting the de-indexing injunction. Effectiveness was also related to worldwide effect, in that "the quest for elusive effectiveness led to the [de-indexing order] having worldwide effect." While the worldwide effect of the order did not make it more effective, it could raise questions of comity.

The minority were also concerned that, although in form an application for interim relief, the order would in substance be a permanent injunction against a party that had neither acted unlawfully nor aided and abetted illegal action. It gave Equustek broader relief than it had claimed substantively against Datalink. The order was mandatory and would require court supervision (for instance in updating the list of websites to be de-indexed). The minority also considered that Equustek had alternative remedies available against Datalink in another jurisdiction.

Overall the minority considered that the majority had slipped too easily outside the constraints of settled doctrine and practice.


Conclusion


The path that led to the SCC judgment was factually convoluted and dominated by the behaviour of the underlying defendant. The central role played by the pre-existing, apparently worldwide, order requiring Datalink to cease doing business on the internet is striking. If for no other reason, the case may come to be seen as one very much on its own facts.

Where an apparent bad actor thumbs its nose at the court’s authority it is perhaps unsurprising that if a well-resourced global intermediary is haled into court, apparently able to take steps to mitigate damage to the plaintiff at little inconvenience to itself, the tribunal may (if satisfied that it has the power) be inclined to enlist its assistance.

Nevertheless if a future court should contemplate a similar order then a more detailed identification of the rights and interests involved, analysis of any territorial aspects of those rights and consideration of the freedom of speech rights of internet users separate from the sensibilities of states may be key to arriving at an appropriate outcome.