Thursday, 18 April 2019

Users Behaving Badly – the Online Harms White Paper

Last Monday, having spent the best part of a day reading the UK government's Online Harms White Paper, I concluded that if the road to hell was paved with good intentions, this was a motorway.

Nearly two weeks on, after full and further consideration, I have found nothing to alter that view. This is why.

The White Paper

First, a reminder of what the White Paper proposes. The government intends to legislate for a statutory ‘duty of care’ on social media platforms and a wide range of other internet companies that "allow users to share or discover user-generated content, or interact with each other online". This could range from public discussion forums to sites carrying user reviews, to search engines, messaging providers, file sharing sites, cloud hosting providers and many others. 

The duty of care would require them to “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”. This would apply not only to illegal content and activities, but also to lawful material regarded as harmful.

The duty of care would be overseen and enforced by a new regulator, Ofweb, armed with power to fine companies for non-compliance.

Ofweb would set out rules in Codes of Practice that the intermediary companies should follow to comply with their duty of care. For terrorism and child sexual abuse material the Home Secretary would have direct control over the relevant Codes of Practice.

Users would get a guaranteed complaints mechanism to the intermediary companies. The government is consulting on the possibility of appointing designated organisations who would be able to make ‘super-complaints’ to the regulator.

Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. Ofweb would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online.

Which lawful content would count as harmful is not defined. The White Paper provides an ‘initial’ list of content and behaviour that would be in scope: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM).

This is not a list that could readily be transposed into legislation, even if that were the government’s intention. Some of the topics – FGM, for instance – are more specific than others. But most are almost as unclear as ‘harmful’ itself. For instance the White Paper gives no indication as to what would amount to trolling. It says only that ‘cyberbullying, including trolling, is unacceptable’. It could as well have said ‘behaving badly is unacceptable’.

In any event the White Paper leaves the strong impression that the legislation would eschew even that level of specificity and build the regulatory structure simply on the concept of ‘harmful’.

The White Paper does not say in terms how the ‘initial’ list of content and behaviour in scope would be extended. It seems that the regulator would decide:

“This list is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.” [2.2]
In that event Ofweb would effectively have the power to decide what should and should not be regarded as harmful.

The White Paper proposes some exclusions: harms suffered by companies as opposed to individuals, data protection breaches, harms suffered by individuals resulting directly from a breach of cyber security or hacking, and all harms suffered by individuals on the dark web rather than the open internet.

Good intentions

The White Paper is suffused with good intentions. It sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material.

However, flying a virtuous banner is no guarantee that the army is marching in the right direction. Nor does it preclude the possibility that specialised units would be more effective.

The government presents this all-encompassing approach as a virtue, contrasted with:

“a range of UK regulations aimed at specific online harms or services in scope of the White Paper, but [which] creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face” [2.5].
An aversion to fragmentation is like saying that, instead of the framework of criminal offences and civil liability focused on specific kinds of conduct that makes up our mosaic of offline laws, we should have a single offence of Behaving Badly.

We could not contemplate such a universal offence with equanimity. A Law against Behaving Badly would be so open to subjective and arbitrary interpretation as to be the opposite of law: rule by ad hoc command. Assuredly it would fail to satisfy the rule of law requirement of reasonable certainty. By the same token we should treat with suspicion anything that smacks of a universal Law against Behaving Badly Online.

In placing an undefined and unbounded notion of harm at the centre of its proposals for a universal duty of care, the government has set off down that path.

Three degrees of undefined harm

Harm is an amorphous concept. It changes shape according to the opinion of whoever is empowered to apply it: in the government’s proposal, Ofweb.

Even when limited to harm suffered by an individual, harm is an ambiguous term. It will certainly include objectively ascertainable physical injury – the kind of harm to which comparable offline duties of care are addressed.

But it may also include subjective harms, dependent on someone’s own opinion that they have suffered what they regard as harm. When applied to speech, this is highly problematic. One person may enjoy reading a piece of searing prose. Another may be distressed. How is harm, or the risk of harm, to be determined when different people react in different ways to what they are reading or hearing? Is distress enough to render something harmful? What about mild upset, or moderate annoyance? Does offensiveness inflict harm? At its most fundamental, is speech violence? 

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination messaging the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy:

“We are clear that the regulator will not be responsible for policing truth and accuracy online.” [36] 
“Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not.” [7.31]
The White Paper acknowledges that:
“There will be difficult judgement calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online” [7.31]
The contradiction is not something that can be cured by getting some interested parties around a table. It is the cleft stick into which a proposal of this kind inevitably wedges itself, and from which there is no escape.

A third variety of harm, yet more nebulous, can be put under the heading of ‘harm to society’. This kind of harm does not depend on identifying an individual who might be directly harmed. It tends towards pure abstraction, malleable at the will of the interpreting authority.

Harms to society feature heavily in the White Paper. For example, content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
Similarly:
“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.

Democratic deficit

One particular concern is the potential for a duty of care supervised by a regulator and based on a malleable notion of harm to be used as a mechanism to give effect to some Ministerial policy of the day, without the need to obtain legislation.

Thus, two weeks before the release of the White Paper, Health Secretary Matt Hancock suggested that anti-vaxxers could be targeted via the forthcoming duty of care.

The White Paper duly recorded, under “Threats to our way of life”, that “Inaccurate information, regardless of intent, can be harmful – for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health.” [1.23]

If a Secretary of State decides that he wants to silence anti-vaxxers, the right way to go about it is to present a Bill to Parliament, have it debated and, if Parliament agrees, pass it into law. The structure envisaged by the White Paper would create a channel whereby an ad hoc Ministerial policy to silence a particular group or kind of speech could be framed as combating an online harm, pushed to the regulator then implemented by its online intermediary proxies. Such a scheme has democratic deficit hard baked into it.

Perhaps in recognition of this, the government is consulting on whether Parliament should play a role in developing or approving Ofweb’s Codes of Practice. That, however, smacks more of sticking plaster than cure.

Impermissible vagueness

Building a regulatory structure on a non-specific notion of harm is not a matter of mere ambiguity, where some word in an otherwise unimpeachable statute might mean one thing or another and the court has to decide which it is. It strays beyond ambiguity into vagueness and gives rise to rule of law issues.

The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:

"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."
Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."
Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

If the duty of care is based on an impermissibly vague concept such as ‘harm’, then the legislation has a rule of law problem. It is not necessarily cured by empowering the regulator to clothe the skeleton with codes of practice and interpretations, for three reasons: 

First, impermissibly vague legislation does not provide a skeleton at all – more of a canvas on to which the regulator can paint at will; 

Second, if it is objectionable for the legislature to delegate basic policy matters to policemen, judges and juries it is unclear why it is any less objectionable to do so to a regulator; 

Third, regulator-made law is a moveable feast.

All power to the sheriff

From a rule of law perspective undefined harm ought not to take centre stage in legislation.

However if the very idea is to maximise the power and discretion of a regulator, then inherent vagueness in the legislation serves the purpose very well. The vaguer the remit, the more power is handed to the regulator to devise policy and make law.

John Humphrys, perhaps unwittingly, put his finger on it during the Today programme on 8 April 2019 (4:00 onwards). Joy Hyvarinen of Index on Censorship pointed out how broadly Ofcom had interpreted harm in its 2018 survey, to which Humphrys retorted: “You deal with that by defining [harm] more specifically, surely.”

That would indeed be an improvement. But what interest would a government intent on creating a powerful regulator, not restricted to a static list of in-scope content and behaviour, have in cramping the regulator’s style with strict rules and carefully limited definitions of harm? In this scheme of things breadth and vagueness are not faults but a means to an end.

There is a precedent for this kind of approach in broadcast regulation. The Communications Act 2003 refers to 'offensive and harmful', makes no attempt to define them and leaves it to Ofcom to decide what they mean. Ofcom is charged with achieving the objective: 
“that generally accepted standards are applied to the contents of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.
William Perrin and Professor Lorna Woods, whose work on duties of care has influenced the White Paper, say of the 2003 Act that: 
"competent regulators have had little difficulty in working out what harm means" [37]. 
They endorse Baroness Grender’s contribution to a House of Lords debate in November 2018, in which she asked: 
"Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?"
The answer is that in 2003 the legislators did not have to understand what the vague term 'harm' meant because they gave Ofcom the power to decide. It is no surprise if Ofcom has had little difficulty, since it is in reality not 'working out what harm means' but deciding on its own meanings. It is, in effect, performing a delegated legislative function.

Ofweb would be in the same position, effectively exercising a delegated power to decide what is and is not harmful.

Broadcast regulation is an exception from the norm that speech is governed only by the general law. Because of its origins in spectrum scarcity and the perceived power of the medium, it has been considered acceptable to impose stricter content rules and a discretionary style of regulation on broadcast, in addition to the general laws (defamation, obscenity and so on) that apply to all speech.

That does not, however, mean that a similar approach is appropriate for individual speech. Vagueness goes hand in hand with arbitrary exercise of power. If this government had set out to build a scaffold from which to hang individual online speech, it could hardly have done better.

The duty of care that isn’t

Lastly, it is notable that as far as can be discerned from the White Paper the proposed duty of care is not really a duty of care at all.

A duty of care properly so called is a legal duty owed to identifiable persons. They can claim damages if they suffer injury caused by a breach of the duty. Common law negligence and liability under the Occupiers’ Liability Act 1957 are examples. These are typically limited to personal injury and damage to physical property; and only rarely impose a duty on, say, an occupier, to prevent visitors injuring each other. An occupier owes no duty in respect of what visitors to the property say to each other.

The absence in the White Paper of any nexus between the duty of care and individual persons would allow Ofweb’s remit to be extended beyond injury to individuals and into the nebulous realm of harms to society. That, as discussed above, is what the White Paper proposes.

Occasionally a statute creates something that it calls a duty of care, but which in reality describes a duty owed to no-one in particular, breach of which is (for instance) a criminal offence.

An example is s.34 of the Environmental Protection Act 1990, which creates a statutory duty in respect of waste disposal. As would be expected of such a statute, s.34 is precise about the conduct that is in scope of the duty. In contrast, the White Paper proposes what is in effect a universal online ‘Behaving Badly’ law.

Even though the Secretary of State referred in a recent letter to the Society of Editors to “A duty of care between companies and their users”, the ‘duty of care’ described in the White Paper is something quite different from a duty of care properly so called.

The White Paper’s duty of care is a label applied to a regulatory framework that would give Ofweb discretion to decide what user communications and activities on the internet should be deemed harmful, the power to enlist proxies such as social media companies to sniff and snuff them out, and the power to take action against an in-scope company that does not comply.

This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.


Saturday, 16 March 2019

A Ten Point Rule of Law Test for a Social Media Duty of Care

All the signs are that the government will shortly propose a duty of care on social media platforms aimed at reducing the risk of harm to users.
DCMS Secretary of State Jeremy Wright wrote recently:
"A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…". 
The House of Lords Communications Committee invoked a similar 'parity principle':
"The same level of protection must be provided online as offline."
Notwithstanding that the duty of care concept is framed as a transposition of offline duties of care to online, proposals for a social media duty of care will almost certainly go significantly beyond any comparable offline duty of care.

When we examine safety-related duties of care owed by operators of offline public spaces to their visitors, we find that they:
(a) are restricted to objectively ascertainable injury,
(b) rarely impose liability for what visitors do to each other,
(c) do not impose liability for what visitors say to each other. 

The social media duties of care that have been publicly discussed so far breach all three of these barriers. They relate to subjective harms and are about what users do and say to each other. Nor are they restricted to activities that are unlawful as between the users themselves.

The substantive merits and demerits of any proposed social media duty of care will no doubt be hotly debated. But the likely scope of a duty of care raises a prior rule of law issue. The more broadly a duty of care is framed, the greater the risk that it will stray into impermissible vagueness.

The rule of law objection to vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."  

Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."

Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

With all this in mind, I propose a ten point rule of law test by which the government’s proposals, when they appear, may be evaluated. These tests are not about the merits or demerits of the content of any proposed duty of care as such, although of course how the scope and substance of any duty of care is defined will be central to the core rule of law questions of certainty and precision.

These tests are in the nature of a precondition: is the duty of care framed with sufficient certainty and precision to be acceptable as law, particularly bearing in mind potential consequences for individual speech?

It is, for instance, possible for scope to be both broad and clear. That would pass the rule of law test, but might still be objectionable on its merits. But if the scope does not surmount the rule of law threshold of certainty and precision it ought to fall at that first hurdle.

My proposed tests are whether there is sufficient certainty and precision as to:

1. Which operators are and are not subject to the duty of care.
2. To whom the duty of care is owed.
3. What kinds of effect on a recipient will and will not be regarded as harmful.
4. What speech or conduct by a user will and will not be taken to cause such harm.
5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient.
6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform.
7. What circumstances would trigger an operator's duty to take preventive or mitigating steps.
8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm).
9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question.
10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage) would negate the duty of care.

These tests are framed in terms of harms to individuals. Some may object that ‘harm’ should be viewed collectively. From a rule of law perspective it should hardly need saying that constructs such as (for example) harm to society or harm to culture are hopelessly vague.

One likely riposte to objections of vagueness is that a regulator will be empowered to decide on the detailed rules. Indeed it will no doubt be argued that flexibility on the part of a regulator, given a set of high level principles to work with, is beneficial. There are at least two objections to that.

First, the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'être is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.

Second, if the vice of vagueness is potential for arbitrariness, then it is unclear how Parliament delegating policy matters to an independent regulator is any more acceptable than delegating them to a policeman, judge or jury. It compounds, rather than cures, the vice.

Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.



Saturday, 22 December 2018

Internet legal developments to look out for in 2019

A bumper crop of pending litigation and legislative initiatives for the coming year (without even thinking about Brexit).

EU copyright reform

- The proposed Directive on Copyright in the Digital Single Market is currently embroiled in trilogue discussions between Commission, Council and Parliament. It continues to excite controversy over the publishers’ ancillary right and the clash between Article 13 and the ECommerce Directive's intermediary liability provisions.
- Political agreement was reached on 13 December 2018 to a Regulation extending the country of origin provisions of the Satellite and Cable Broadcasting Directive to online radio and news broadcasts. Formal approval of a definitive text should follow in due course.
EU online business

The European Commission has proposed a Regulation on promoting fairness and transparency for business users of online intermediation services. It would lay down transparency and redress rules for the benefit of business users of online intermediation services and of corporate website users of online search engines. The legislation would cover online marketplaces, online software application stores, online social media and search engines. The Council of the EU reached a common position on the draft Regulation on 29 November 2018.

Telecoms privacy

The proposed EU ePrivacy Regulation continues to make a choppy voyage through the EU legislative process.

Intermediary liability

The UK government has published its Internet Safety Strategy Green Paper, the precursor to a White Paper to be published in winter 2018-2019 which will include intermediary liability, duties and responsibilities. In parallel the House of Lords Communications Committee is conducting an inquiry on internet regulation, including intermediary liability. A House of Commons Committee examining Disinformation and Fake News has also touched on the topic. Before that the UK Committee on Standards in Public Life suggested that Brexit presents an opportunity to depart from the intermediary liability protections of the ECommerce Directive.
On 12 September 2018 the European Commission published a Proposal for a Regulation on preventing the dissemination of terrorist content online. This followed its September 2017 Communication on Tackling Illegal Content Online and March 2018 Recommendation on Measures to Effectively Tackle Illegal Content Online. It is notable for its one-hour takedown response times and for allowing Member States to derogate from the ECommerce Directive Article 15 prohibition on imposing general monitoring obligations on conduits, caches and hosts.
The Austrian Supreme Court has referred to the CJEU questions on whether a hosting intermediary can be required to prevent access to similar content and on extraterritoriality (C-18/18 - Glawischnig-Piesczek). The German Federal Supreme Court has referred two cases (YouTube and Uploaded) to the CJEU asking questions about (among other things) the applicability of the ECommerce Directive intermediary protections to UGC sharing sites.
Pending CJEU copyright cases

Several copyright references are pending in the EU Court of Justice. Issues under consideration include whether the EU Charter of Fundamental Rights can be relied upon to justify exceptions or limitations beyond those in the Copyright Directive (Spiegel Online GmbH v Volker Beck, C-516/17; Funke Medien, C-469/17, Advocate General Opinion 25 October 2018; and Pelham, C-476/17, Advocate General Opinion 12 December 2018); and whether a link to a PDF amounts to publication for the purposes of the quotation exception (Spiegel Online GmbH v Volker Beck, C-516/17). The Dutch Tom Kabinet case on secondhand e-book trading has been referred to the CJEU (Case C-263/18). The YouTube and Uploaded cases pending from the German Federal Supreme Court include questions around the communication to the public right.
Online pornography

The Digital Economy Act 2017 grants powers to a regulator (subsequently designated to be the British Board of Film Classification) to determine age control mechanisms for internet sites that make ‘R18’ pornography available; and to direct ISPs to block such sites that either do not comply with age verification or contain material that would not be granted an R18 certificate. The process of putting in place the administrative arrangements is continuing.
Cross-border liability and jurisdiction

The French CNIL/Google case on search engine de-indexing has raised significant issues on extraterritoriality, including whether Google can be required to de-index on a global basis. The Conseil d'Etat has referred various questions about this to the CJEU [Case C-507/17; Advocate General Opinion delivered 10 January 2019]. C-18/18 Glawischnig-Piesczek, a reference from the Austrian Supreme Court, also raises territoriality questions in the context of Article 15 of the ECommerce Directive.
In the law enforcement field the EU has proposed a Regulation on EU Production and Preservation Orders (the ‘e-Evidence Regulation’) and an associated Directive, which would set up a regime for some cross-border requests made direct to service providers. The UK has said that it will not opt in to the Regulation. US-UK bilateral negotiations on direct cross-border access to data are continuing. The Crime (Overseas Production Orders) Bill, which would put in place a mechanism enabling UK authorities to make cross-border requests under such a bilateral agreement, is progressing through Parliament. [Meanwhile discussions continue on a Second Protocol to the Cybercrime Convention, on evidence in the cloud.]
Online state surveillance

The UK’s Investigatory Powers Act 2016 (IP Act) has come almost completely into force, including amendments following the Watson/Tele2 decision of the CJEU. However the arrangements for a new Office for Communications Data Authorisation to approve requests for communications data have yet to be put in place.
Meanwhile a pending reference to the CJEU from the Investigatory Powers Tribunal raises questions as to whether the Watson decision applies to national security, and if so how; whether mandatorily retained data have to be held within the EU; and whether those whose data have been accessed have to be notified.
Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. It has been granted permission to appeal to the Court of Appeal on the question whether the data retention powers constitute illegitimate generalised and indiscriminate retention.
The IP Act (in particular the bulk powers provisions) may be indirectly affected by cases in the CJEU (challenges to the EU-US Privacy Shield and to the Belgian communications data retention regime), in the European Court of Human Rights (in which Big Brother Watch and various other NGOs challenge the existing RIPA bulk interception regime) and by an attempted judicial review by Privacy International of an Investigatory Powers Tribunal decision on equipment interference powers (the Supreme Court is considering whether RIPA ousted the possibility of judicial review).
The ECtHR gave a Chamber judgment in the BBW case on 13 September 2018. If the judgment becomes final it could affect the IP Act in as many as three separate ways. The NGOs have lodged an application for the judgment to be referred to the ECtHR Grand Chamber, as have the applicants in the Swedish Rattvisa case, in which judgment was given on 19 June 2018.
In the Privacy International equipment interference case, the Court of Appeal has held that the Investigatory Powers Tribunal decision is not susceptible of judicial review.  A further appeal has been heard by the Supreme Court. Judgment is awaited.
Compliance of the UK’s surveillance laws with EU Charter fundamental rights will be a factor in any data protection adequacy decision that is sought once the UK becomes a non-EU third country post-Brexit.

[Software – goods or services? A pending appeal to the UK Supreme Court will consider whether software supplied electronically as a download, and not on any tangible medium, is ‘goods’ for the purposes of the Commercial Agents Regulations: Computer Associates (UK) Ltd v The Software Incubator Ltd, hearing 28 March 2019.]


[Updated 28 Dec 2018 to add due date of AG Opinion in Google v CNIL, 2 January 2019 to add the CJEU reference on the Belgian communications data retention regime and the pending Supreme Court decision on ouster; 4 Jan 2019 to add the AG Opinion in Pelham; 14 Jan 2019 to add Rattvisa application to refer to ECtHR Grand Chamber; 15 Jan 2019 to add AG Opinion in Google v CNIL and Computer Associates v Software Incubator appeal; 16 Jan 2019 to add Cybercrime Convention.] 


Tuesday, 30 October 2018

What will be in Investigatory Powers Act Version 1.2?


Never trust version 1.0 of any software. Wait until the bugs have been ironed out, only then open your wallet.

The same is becoming true of the UK’s surveillance legislation. No sooner was the ink dry on the Investigatory Powers Act 2016 (IP Act) than the first bugs, located in the communications data retention module, were exposed by the EU Court of Justice (CJEU) judgment in Tele2/Watson.

After considerable delay in issuing required fixes, Version 1.1 is currently making its way through Parliament. The pending amendments to the Act make two main changes. They restrict to serious crime the crime-related purposes for which the authorities may demand access to mandatorily retained data, and they introduce prior independent authorisation for non-national security demands.

It remains uncertain whether more changes to the data retention regime will be required in order to comply with the Tele2/Watson judgment.  That should become clearer after the outcome of Liberty’s appeal to the Court of Appeal in its judicial review of the Act and various pending references to the CJEU.

Meanwhile the recent Strasbourg judgment in Big Brother Watch v UK (yet to be made final, pending possible referral to the Grand Chamber) has exposed a separate set of flaws in the IP Act’s predecessor legislation, the Regulation of Investigatory Powers Act 2000 (RIPA). These were in the bulk interception and communications data acquisition modules. To the extent that the flaws have been carried through into the new legislation, fixing them may require the IP Act to be patched with a new Version 1.2.

The BBW judgment does not read directly on to the IP Act. The new legislation is much more detailed than RIPA and introduces the significant improvement that warrants have to be approved by an independent Judicial Commissioner.  Nevertheless, the BBW judgment contains significant implications for the IP Act. 

The Court found that three specific aspects of RIPA violated the European Convention on Human Rights:
  • Lack of robust end to end oversight of bulk interception acquisition, selection and searching processes
  • Lack of controls on use of communications data acquired from bulk interception
  • Insufficient safeguards on access to journalistically privileged material, under both the bulk interception regime and the ordinary communications data acquisition regime

End to end oversight

The bulk interception process starts with selection of the bearers (cables or channels within cables) that will be tapped.  It culminates in various data stores that can be queried by analysts or used as raw material for computer analytics. In between are automated processes for filtering, selecting and analysing the material acquired from the bearers. Some of these processes operate in real time or near real time, others are applied to stored material and take longer. Computerised processes will evolve as available technology develops.

The Court was concerned about lack of robust oversight under RIPA throughout all the stages, but especially of the selectors and search criteria used for filtering. Post factum audit by the Interception of Communications Commissioner was judged insufficient.

For its understanding of the processes the Court relied upon a combination of sources: the Interception Code of Practice under RIPA, the Intelligence and Security Committee Report of March 2015, the Investigatory Powers Tribunal judgment of 5 December 2014 in proceedings brought by Liberty and others, and the Government’s submissions in the Strasbourg proceedings. The Court described the processes thus:

“…there are four distinct stages to the section 8(4) regime:

1.  The interception of a small percentage of Internet bearers, selected as being those most likely to carry external communications of intelligence value.
2.  The filtering and automatic discarding (in near real-time) of a significant percentage of intercepted communications, being the traffic least likely to be of intelligence value.
3.  The application of simple and complex search criteria (by computer) to the remaining communications, with those that match the relevant selectors being retained and those that do not being discarded.
4.  The examination of some (if not all) of the retained material by an analyst.”
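Expressed as code, the four stages form a progressively narrowing pipeline. The Python sketch below is purely illustrative: the predicate functions are invented stand-ins for the classified selection and filtering criteria, and exist only to make the staged structure visible.

    # Invented stand-ins for the (classified) selection criteria. Each
    # bearer and communication is modelled as a dict of attributes.
    def likely_intelligence_value(bearer):
        return bearer.get("carries_external_comms", False)

    def least_likely_value(comm):
        return comm.get("low_value", False)

    def matches_selectors(comm):
        return comm.get("matches_selector", False)

    def section_8_4_pipeline(bearers, communications_on):
        # Stage 1: intercept a small percentage of bearers, selected as
        # those most likely to carry external communications of value.
        tapped = [b for b in bearers if likely_intelligence_value(b)]
        intercepted = [c for b in tapped for c in communications_on(b)]
        # Stage 2: filter and automatically discard, in near real time,
        # the traffic least likely to be of intelligence value.
        remaining = [c for c in intercepted if not least_likely_value(c)]
        # Stage 3: apply simple and complex search criteria by computer;
        # retain matches, discard the rest.
        retained = [c for c in remaining if matches_selectors(c)]
        # Stage 4: some (if not all) of the retained material is examined
        # by an analyst.
        return retained

Put this way, the Court’s complaint was that under RIPA nobody outside the agency robustly oversaw the predicates themselves: the choice of tapped bearers, the discard rules and the selectors.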

The reference to a ‘small percentage’ of internet bearers derives from the March 2015 ISC Report. Earlier in the judgment the Court said:

“… GCHQ’s bulk interception systems operated on a very small percentage of the bearers that made up the Internet and the ISC was satisfied that GCHQ applied levels of filtering and selection such that only a certain amount of the material on those bearers was collected.”

Two points about this passage are worthy of comment. First, while the selected bearers may make up a very small percentage of the estimated 100,000 bearers that make up the global internet (judgment, [9]), that is not the same thing as the percentage of bearers that land in the UK.

Second, the ISC report is unclear about how far, if at all, filtering and selection processes are applied not just to content but also to communications data (metadata) extracted from intercepted material. Whilst the report describes filtering, automated searches on communications using complex criteria and analysts performing additional bespoke searches, it also says:

Related CD (RCD) from interception: GCHQ’s principal source of CD is as a by-product of their interception activities, i.e. when GCHQ intercept a bearer, they extract all CD from that bearer. This is known as ‘Related CD’. GCHQ extract all the RCD from all the bearers they access through their bulk interception capabilities.” (emphasis added)

The impression that collection of related communications data may not be filtered is reinforced by the Snowden documents, which referred to several databases derived from bulk interception and which contained very large volumes of non-content events data. The prototype KARMA POLICE, a dataset focused on website browsing histories, was said to comprise 17.8 billion rows of data, representing 3 months’ collection. (The existence or otherwise of KARMA POLICE and similar databases has not been officially acknowledged, although the then Interception of Communications Commissioner in his 2014 Annual Report reported that he had made recommendations to interception agencies about retention periods for related communications data.)

The ISC was also “surprised to discover that the primary value to GCHQ of bulk interception was not in reading the actual content of communications, but in the information associated with those communications.”

If it is right that little or no filtering is applied to collection of related communications data (or secondary data as it is known in the IP Act), then the overall end to end process would look something like this (the diagram draws on Snowden documents published by The Intercept as well as the sources already mentioned):
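In outline, and reconstructed here in text form from the sources just described:

    tapped bearers
      |
      +--> related communications data: extracted in full, unfiltered
      |        |
      |        v
      |    events/metadata stores (e.g. the reported KARMA POLICE)
      |        |
      |        v
      |    queries, analytics, pattern analysis
      |
      +--> content: near real-time filtering and discard
               |
               v
           simple and complex selectors
               |
               v
           retained material --> examination by analysts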


Returning to the BBW judgment, the Court’s concerns related to intercepted ‘communications’ and ‘material’:

“the lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst…”

There is no obvious reason to limit those observations to content. Elsewhere in the judgment the Court was “not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content” and went on:

“The related communications data … could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with…”.

The Court went on to make specific criticisms of RIPA’s lack of restrictions on the use of related communications data, as discussed below.

What does the Court’s finding on end to end oversight mean for the IP Act? The Act introduces independent approval of warrants by Judicial Commissioners, but does it create the robust oversight of the end to end process, particularly of selectors and search criteria, that the Strasbourg Court requires?

The March 2015 ISC Report recommended that the oversight body be given express authority to review the selection of bearers, the application of simple selectors and initial search criteria, and the complex searches which determine which communications are read. The Bulk Powers Review by David Anderson Q.C. (now Lord Anderson) records (para 2.26(g)) an assurance given by the Home Office that that authority is inherent in clauses 205 and 211 of the Bill (now sections 229 and 235 of the IP Act).

Beyond that, under the IP Act the Judicial Commissioners have to consider at the warrant approval stage the necessity and proportionality of conduct authorised by a bulk warrant. Arguably that includes all four stages identified by the Strasbourg Court (see my submission to IPCO earlier this year). If that is right, the RIPA gap may have been partially filled.

However, the IP Act does not specify in terms that selectors and search criteria have to be reviewed. Moreover, focusing on those particular techniques already seems faintly old-fashioned. The Bulk Powers Review reveals the extent to which more sophisticated analytical techniques such as anomaly detection and pattern analysis are brought to bear on intercepted material, particularly communications data. Robust end to end oversight ought to cover these techniques as well as use of selectors and automated queries.  
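The distinction matters for oversight. As a hedged illustration in Python (the data shapes and the threshold are invented for the purpose), a selector-based query starts from a known identifier, whereas pattern or anomaly analysis starts from the shape of the data as a whole, so a review regime confined to selectors would never see it:

    from collections import Counter

    # events: iterable of (sender, recipient, timestamp) metadata records.
    def selector_query(events, selector):
        # Selector-based search: return events matching a known identifier.
        return [e for e in events if selector in (e[0], e[1])]

    def volume_anomalies(events, threshold=100):
        # Pattern analysis: flag senders whose communication volume is
        # unusually high, with no prior selector at all (threshold invented).
        volume = Counter(sender for sender, _, _ in events)
        return [s for s, n in volume.items() if n > threshold]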

The remainder of the gap could perhaps be filled by an explanation of how closely the Judicial Commissioners oversee the various selection, searching and other analytical processes.

Filling this gap may not necessarily require amendment of the IP Act, although it would be preferable if it were set out in black and white. It could perhaps be filled by an IPCO advisory notice: first as to its understanding of the relevant requirements of the Act; and second explaining how that translates into practical oversight, as part of bulk warrant approval or otherwise, of the end to end stages involved in bulk interception (and indeed the other bulk powers).

Related Communications Data/Secondary Data

The diagram above shows how communications data can be obtained from bulk interception. Under RIPA this was known as Related Communications Data. In the IP Act it is known as Secondary Data. Unlike RIPA, the IP Act specifies a category of bulk warrant that extracts secondary data alone (without content) from bearers.  However, the IP Act definition of secondary data also permits some items of content to be extracted from communications and treated as communications data.

Like RIPA, the IP Act contains few specific restrictions on the use to which secondary data can be put. It may be examined for a reason falling within the overall statutory purposes and subject to necessity and proportionality. The IP Act adds the requirement that the reason be within the operational purposes (which can be broad) specified in the bulk warrant. As with RIPA, the restriction that the purpose of the bulk interception must be overseas-related does not apply at the examination stage. As under RIPA, specific authority (in the IP Act, a targeted examination warrant) is required to select for examination the communications of someone known to be within the British Islands. But, as under RIPA, this applies only to content, not to secondary data.

RIPA’s lack of restriction on examining related communications data was challenged in the Investigatory Powers Tribunal. The government argued (and did so again in the Strasbourg proceedings) that this was necessary in order to be able to determine whether a target was within the British Islands, and hence whether it was necessary to apply for specific authority from the Secretary of State to examine the content of the target’s communications.

The IPT accepted this argument, holding that the difference in the restrictions was justified and proportionate by virtue of the need to be able to determine whether a target was within the British Islands. It rejected as “an impossibly complicated or convoluted course” the suggestion that RIPA could have provided a specific exception to provide for the use of metadata for that purpose.

That, however, left open the question of all the other uses to which metadata could be put. If the Snowden documents referred to above are any guide, those uses are manifold.  Bulk intercepted metadata would hardly be of primary value to GCHQ, as described by the ISC, if its use were restricted to ascertaining whether a target was within or outside the British Islands.

The Strasbourg Court identified this gap in RIPA and held that the absence of restrictions on examining related communications data was a ground on which RIPA violated the ECHR.

The Court accepted that related communications data should be capable of being used in order to ascertain whether a target was within or outside the British Islands. It also accepted that that should not be the only use to which it could be put, since that would impose a stricter regime than for content.

But it found that there should nevertheless be “sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.”

Transposed to the IP Act, this could require a structure for selecting secondary data for examination along the following lines (sketched in code after the list):
  • Selection permitted in order to determine whether an individual is, for the time being, in the British Islands.
  • Targeted examination warrant required if (a) any criteria used for the selection of the secondary data for examination are referable to an individual known to be in the British Islands, and (b) the purpose of using those criteria is to identify secondary data or content relating to communications sent by, or intended for, that individual.
  • Otherwise: selection of secondary data permitted (but subject to the robust end to end oversight requirements discussed above).
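Reduced to decision logic, that structure might be sketched as follows. The function and value names are invented for illustration and are not drawn from the Act's actual drafting:

    def may_select_secondary_data(purpose, criteria_referable_to_person_in_bi,
                                  has_targeted_examination_warrant):
        # First bullet: always permitted in order to determine whether an
        # individual is, for the time being, in the British Islands.
        if purpose == "determine_whether_in_british_islands":
            return True
        # Second bullet: criteria referable to an individual known to be
        # in the British Islands, used to identify that individual's
        # communications, require a targeted examination warrant.
        if (criteria_referable_to_person_in_bi
                and purpose == "identify_individuals_communications"):
            return has_targeted_examination_warrant
        # Third bullet: otherwise permitted, subject to the robust end to
        # end oversight requirements discussed above.
        return True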

Although the Court speaks only of sufficient safeguards, it is difficult to see how this could be implemented without amendment of the IP Act.

Journalistic privilege

The Court found RIPA lacking in two areas: bulk interception (for both content and related communications data) and ordinary communications data acquisition. The task of determining to what extent the IP Act remedies the deficiencies is complex. However, in the light of the comparisons below it seems likely that at least some amendments to the legislation will be necessary.

Bulk interception
For bulk interception, the Court was particularly concerned that there were no requirements either:
  • circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector),
  • requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved.

Consequently, the Court said, it would appear that analysts could search and examine without restriction both the content and the related communications data of those intercepted communications.

For targeted examination warrants the IP Act itself contains some safeguards relating to retention and disclosure of material where the purpose, or one of the purposes, of the warrant is to authorise the selection for examination of journalistic material which the intercepting authority believes is confidential journalistic material. Similar provisions apply if the purpose, or one of the purposes, of the warrant is to identify or confirm a source of journalistic information.

Where a targeted examination warrant is unnecessary the Interception Code of Practice provides for corresponding authorisations and safeguards by a senior official outside the intercepting agency.

Where a communication intercepted under a bulk warrant is retained following examination and it contains confidential journalistic material, the Investigatory Powers Commissioner must be informed as soon as reasonably practicable.

Unlike RIPA, S.2 of the IP Act contains a general provision requiring public authorities to have regard to the particular sensitivity of any information, including confidential journalistic material and the identity of a journalist’s source.

Whilst these provisions are an improvement on RIPA, it will be open to debate whether they are sufficient, particularly since the specific safeguards relate to arrangements for handling, retention, use and destruction of the communications rather than to search and selection.

Bulk communications data acquisition
The IP Act introduces a new bulk communications data acquisition warrant to replace S.94 of the Telecommunications Act 1984. S.94 was not considered in the BBW case. The IP Act bulk power contains no provisions specifically protecting journalistic privilege. The Code of Practice expands on the general provisions in S.2 of the Act.

Ordinary communications data acquisition
The RIPA Code of Practice required an application to a judge under PACE 1984 where the purpose of the application was to determine a source. The Strasbourg Court criticised this on the basis that it did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely.

The IP Act contains a specific provision requiring a public authority to seek the approval of the Investigatory Powers Commissioner to obtain communications data for the purpose of identifying or confirming a source of journalistic information. This provision appears to suffer the same narrowness of scope criticised by the Strasbourg Court.