Tuesday, 6 April 2021

Seriously annoying tweets

The row over Section 59 of the Police, Crime, Sentencing and Courts Bill is reminiscent of a backwater pond that has lain undisturbed for years. Then someone decides to poke a stick in it and all manner of noxious fumes are released.

In this instance the pond is the common law offence of public nuisance. The stick that has disturbed it is the government’s proposal to replace the common law offence with a statutory codification. The noxious fume that has been released is the risk of criminalising legitimate public protest.

Section 59 would replace the common law public nuisance offence with a statutory equivalent. The new offence would consist of intentionally or recklessly causing serious harm, or a risk of serious harm, to the public or a section of the public. Such harm would include “serious distress, serious annoyance, serious inconvenience or serious loss of amenity”. There would be a defence of reasonable excuse. “Serious annoyance”, in particular, has been criticised as overly broad.

Section 59 is situated in the Public Order part of the Bill: a collection of provisions about policing public demonstrations. But Section 59 is not limited to behaviour in the street. In impeccably technology-neutral fashion it would apply to any "act". Posting a tweet is as much an "act" as gluing oneself to the road.

Criticism of Section 59 has focused on its potential for affecting street protests. Little attention has been paid to online communications. How would “serious annoyance” translate from street to tweet? Is a seriously annoying tweet the same kind of thing as a seriously annoying street protest? Is the potential impact of the Section 59 offence greater, less or no different in the online rather than the physical environment? Spoiler alert: it is at least the same and probably greater. How much greater we can only guess at – reason enough to send Section 59 back to the drawing board.

Origin of Section 59

The official response to concerns about Section 59 is that there is nothing to see here: it merely implements the Law Commission’s 2015 recommendations for codification of the common law public nuisance offence. The Secretary of State for Justice during the Second Reading of the Bill described concerns about “annoyance” as a “canard” (see further below).

It does appear that the Law Commission’s recommendations excited no public controversy at the time. Its consultation paper attracted a total of 10 responses on the public nuisance offence, none of which opposed its overall proposals.

However, for at least two reasons things are not that simple. First, the Law Commission did not discuss how the offence might apply to public online communications. (For that matter it barely touched on real world protest, even though the common law offence had been deployed against "sit-down" demonstrations in the 1960s.) Second, in recommending “serious annoyance” as a criterion it reformulated the common law offence in terms that, although intended to keep the statutory offence within clear bounds, may have the opposite result when applied to online speech. It is hard to avoid the impression that the question of what “serious annoyance” might mean when transposed from street to tweet was not on the Law Commission’s radar.

Before delving into those issues, some context is helpful. 

The Law Commission’s June 2015 report was the first product of a larger project to simplify the criminal law. The report recommended that the common law public nuisance offence should be replaced with a statutory codification. The statutory offence would differ in some respects from the common law offence. The mental element would be set at intention or recklessness rather than negligence. The statutory offence would have to prescribe a maximum penalty. The Law Commission made no recommendation as to that, other than to observe that the maximum sentence should reflect that the offence was intended to address serious cases for which other offences were not adequate. The Policing Bill proposes a maximum custodial sentence of 10 years.

The Law Commission’s proposals sat on the shelf for the best part of six years. Why the government has chosen this moment to blow the dust off them and poke a stick in the pond is a matter of speculation. Whatever the reason, the government has done it and now people are reading the wording of Section 59. They see “serious annoyance” and question how that wording would apply to street demonstrations. Equally, we can ask how it would apply to online behaviour.

The Law Commission did not consider online communications

Neither the 2015 Law Commission Report nor its preceding consultation paper addressed how the codified offence, including the “serious annoyance” language, might apply to public online communications such as social media posts. The common law offence was certainly capable of doing so, as the Law Commission later acknowledged in its 2018 Scoping Report on Abusive and Offensive Online Communications.

This extract from the 2015 Report illustrates how far removed the Law Commission’s focus was from online speech:
“In our view, its proper use is to protect the rights of members of the public to enjoy public spaces and use public rights (such as rights of way) without danger, interference or annoyance.”
For whatever reason, the 2015 Report paid little attention to the possible effect of the public nuisance offence on freedom of expression, whether offline or online. It did note that its proposed reasonableness defence “would include cases where the defendant’s conduct is in exercise of a right under Article 10 (freedom of expression) or 11 (freedom of assembly and association) of the European Convention on Human Rights.”

But it added in a footnote: “It is somewhat difficult to imagine examples in which this point arises in connection with public nuisance.” This comment is not easy to understand in the online context, where any application of the offence to an online post is likely to engage Article 10. Use of the common law offence against sit-down demonstrations in the 1960s also seems pertinent.

Over-vigorous application of a statutory offence might be greeted in similar terms to those employed by the Lord Chief Justice in the Twitter Joke Trial case (Chambers v DPP), an appeal against conviction under s.127 of the Communications Act 2003:
“The 2003 Act did not create some newly minted interference with the first of President Roosevelt's essential freedoms – freedom of speech and expression. Satirical, or iconoclastic, or rude comment, the expression of unpopular or unfashionable opinion about serious or trivial matters, banter or humour, even if distasteful to some or painful to those subjected to it should and no doubt will continue at their customary level, quite undiminished by this legislation.”
But when we are considering conversion of public nuisance into a statutory offence, is it enough to hope that what on the face of it looks like overly broad language (with concomitant chilling effects on speech) would be rescued by the ECHR?

The Law Commission’s reformulation 

The common law offence, as endorsed in 2005 in the leading House of Lords case of Rimmington, is articulated in terms of "endangering the comfort of the public". The Law Commission described that terminology as "somewhat archaic", "wide and vague" in everyday language, which "could include very trivial reasons for displeasure". It proposed instead: "serious distress, annoyance, inconvenience or loss of amenity". In Section 59 this is rendered as “serious distress, serious annoyance, serious inconvenience or serious loss of amenity”.

The Law Commission evidently considered that by recommending a change in language from "endangering the comfort of the public" to "serious annoyance" it was narrowing the potential scope of the offence. It certainly intended to exclude the possibility of catching trivial displeasure.

Yet, when applied to pure speech, the reformulation seems less constraining than the original. "Comfort" could be taken to connote a physical or sensory element that is not a requirement for "annoyance": consider the disruptive effect on the public of a hoax bomb threat, compared with public reaction to the contents of an offensive tweet. 

Back to Blackstone?

If "annoyance" is a well understood term in relation to the common law offence, might that provide a basis on which to interpret Section 59 narrowly? 
Blackstone referred to "nuisances that are an annoyance to all the King’s subjects". 

In the 1700s public nuisance concerned environmental and public health misdeeds such as "noisome and offensive stinks and smells", polluting the Thames, or taking a child infected with smallpox through a public street. 

Whilst those readily fit the description of annoyances, how would that read across to speech? Can we even conceptualise a foul-smelling tweet? A noxious vapour and an obnoxious tweet are categorically different, one impinging on the senses and the other on the mind. Yet under Section 59 the courts would be asked to apply the same statutory language to both. The one context does not provide a guide to the other.

As the Law Commission observed in its 2015 Report, the common law offence has expanded from those roots to cover such diverse behaviour as plotting to switch off the lights at a football match, threatening suicide by jumping from bridges, hosting acid house parties, hanging from bridges, jumping into a river during a boat race, sniffing glue in public, lighting flares or fireworks at football matches, or recording videos threatening bombings – what it called "general public misbehaviour".

As the common law offence has developed since Blackstone to cover a greater variety of misbehaviour, correspondingly greater caution has to be exercised over the language used to characterise the elements of the offence.

Did the Law Commission mean to include online communications?

If the Law Commission did not in its 2015 Report specifically consider the impact of its recommendations on online communications, might that be because the statutory offence was not intended to apply to them?

As a largely technical exercise in codification, a proposed statutory offence would be expected to mirror the scope of the common law offence unless explicitly stated otherwise.

As to the common law offence, Lord Nicholls in Rimmington posed the example of a hoax message of the existence of a public danger, such as a bomb in a railway station, communicated by telephone. That, he said, even if communicated to one person alone, would be a public nuisance because it was intended to be passed on to users of the railway station. If a message communicated in that way can be a public nuisance, then all the more so a tweet published directly to the world.

If there were any doubt about that, the Law Commission acknowledged in its 2018 Scoping Report on Abusive and Offensive Online Communications that the common law offence is already capable of applying to public social media posts. The Law Commission identified overlap with, for instance, existing statutory harassment and communications offences.

The 2015 Law Commission Report discussed the Rimmington judgment in detail. It did not suggest that misbehaviour covered by its proposed statutory offence should exclude electronic communications. The technology-neutral approach of Section 59 is no accident, even if the consequences for social media and internet communications were not discussed.

Would Section 59 be used against online behaviour?

The 2018 Law Commission Scoping Report observed: “Given the wide array of statutory offences covering online harassment, it is difficult to see public nuisance being justifiably used in favour of these other offences in cases of online harassment and stalking.”

It noted that the common law offence was “very broad in scope”. While acknowledging that the public nuisance offence could cover online behaviour, the Law Commission said that it was not aware of any prosecution. Nor does the Crown Prosecution Service social media prosecution guidance mention public nuisance.

None of that, however, means that the same would hold true once the public nuisance offence is given statutory force. 

The Law Commission’s observation about justifiability of prosecution of the common law offence rests on the primacy given to statutory offences. As the Law Commission explained in its 2015 Report, there is a presumption against using a common law offence where the same territory is covered by a statutory offence. That falls away once the public nuisance offence itself becomes statutory.

More fundamentally, there is nothing like a statutory codification to bring a common law offence back to full life and vigour. Language embedded in a statute gains strength from the fact that it represents the explicit will of the legislature. No longer is the court incrementally developing a common law offence within the bounds of reasonable foreseeability in order to accommodate changing activities. For a statutory offence its task is to interpret specific words to which Parliament has expressly agreed. Once written down in a statute, words tend to take on a life of their own. The broader they are, the greater the potential for them to do so.

Prosecutorial guidance

The Law Commission suggested that the effect of removing the presumption could be mitigated by development of prosecutorial guidance, which could state that the offence should not be used when a more specific offence is available except for good reasons. Prosecutorial discretion, however, is no substitute for an appropriately drawn offence. Where speech is concerned, reliance on prosecutorial discretion is apt to produce the kind of uncertainty that gives rise to a chilling effect on freedom of expression.

Even if relying on prosecutorial discretion to mitigate an over-broad offence were an acceptable way of proceeding in the past, for online speech it now has harmful consequences that do not apply offline. Why so? Because when online intermediaries (such as web hosts, discussion forums and social media platforms) are incentivised (or even, come the proposed Online Safety Bill, obliged on pain of regulatory sanctions) to remove illegal content, the test of illegality is not whether a prosecutor would decide to bring charges. It is whether the content falls within the letter of the statute. It matters more than ever before that the language of a statute should clearly and precisely catch only what it ought to catch and nothing more.

The canard of annoyance

The Secretary of State for Justice Robert Buckland, during the Bill’s Commons Second Reading, suggested that concern about the term annoyance was a “canard”. He prayed in aid the authority of Lord Bingham:
“The law had been restated with reference to the use of the word “annoyance” by none other than the late and noble Lord Bingham when he was in the House of Lords. He set out the law very clearly. Clause 59 amounts to no more than a reiteration of the excellent work of the Law Commission. To say anything else is, frankly, once again a confection, a concoction and a twisting of the reality.”
This presumably was a reference to Lord Bingham’s speech in Rimmington. Lord Bingham concluded that the common law public nuisance offence, interpreted in the way that he specified, passed the legality test:
“A legal adviser asked to give his opinion in advance would ascertain whether the act or omission contemplated was likely to inflict significant injury on a substantial section of the public exercising their ordinary rights as such: if so, an obvious risk of causing a public nuisance would be apparent; if not, not."
Did Lord Bingham intend "significant injury" to include "serious annoyance"? The critical passage in his speech is at paragraph 36:
“I would for my part accept that the offence as defined by Stephen, as defined in Archbold (save for the reference to morals), as enacted in the Commonwealth codes quoted above and as applied in the cases (other than R v Soul 70 Cr App R 295) referred to in paras 13 to 22 above is clear, precise, adequately defined and based on a discernible rational principle.”
The offence as defined by Stephen was quoted by Lord Bingham at para 10 of his speech. It does not include "annoyance". In paragraphs 9 and 10 he quoted the offence as defined in different editions of Archbold. Again there is no mention of "annoyance". He went on in paragraph 11 to examine the Commonwealth codes of Canada, Queensland and Tasmania. None of those mentions "annoyance". At paragraphs 13 to 22 of his speech he reviewed numerous cases. None of the passages from judgments that he quoted mentions "annoyance".

“Annoyance” was, however, mentioned in the two authorities that Lord Bingham quoted in paragraph 8 of his speech: Hawkins’ Pleas of the Crown (1716) and Blackstone’s Commentaries (1768). Lord Bingham omitted both of those from the critical passage quoted above, endorsing only Stephen and Archbold.

That leaves the reference in paragraph 10 of Lord Bingham’s speech to Section 268 of the Indian Penal Code of 1860. That Commonwealth provision includes the phrase "common injury, danger or annoyance". Lord Bingham commented that it seemed likely that the draftsman of that provision intended to summarise the English common law on public nuisance "as then understood". 

Whatever may have been the position in 1860, today there is every reason to doubt whether the expression “serious annoyance” captures either the common law offence as it currently applies, or the statutory offence as it ought to apply, to public online communications.

[7 April 2021. Added “Commonwealth” to penultimate paragraph.] 



Sunday, 7 February 2021

Corrosion-proofing the UK’s intermediary liability protections

The UK having now cut its direct ties with EU law, what does its future hold for the intermediary liability protections in Articles 12 to 15 of the Electronic Commerce Directive?

Until recently, the government’s policy has been taken to be as stated in its 2019 “eCommerce Directive guidance for businesses if there’s no Brexit deal”:

“Immediately following the UK’s exit from the EU in a no deal scenario, the government will minimise disruption by prioritising continuity and stability. Therefore the UK’s policy approach will continue to align with the provisions contained in the Directive, including those on liability of intermediary service providers and general monitoring.”

Consistently with that, in October 2020 the government published post-transition guidance, stating that it "has no current plans to change the UK’s intermediary liability regime or its approach to prohibition on general monitoring requirements".

Articles 12 to 14 provide limitations on the liability of conduits, caches and hosts for unlawful user information. Article 15 prohibits EU member states from imposing general monitoring obligations on those intermediaries. Whether and how long the government’s commitment to Articles 12 to 15 would survive was an open question. With nothing said in the UK-EU Trade and Co-Operation Agreement about online intermediary liability, there appeared to be nothing to prevent the government – should it wish to depart from its previous policy – from legislating in future contrary to Articles 12 to 15, subject always to the possibility of a legal objection on fundamental rights grounds.

There was a detectable drift away from the overt commitment to Article 15 with the publication of the government’s Full Consultation Response to the Online Harms White Paper, published on 15 December 2020. The Response strayed into proposing proactive monitoring obligations that could not readily be reconciled with that policy. That drift was also evident in the simultaneously published Interim Voluntary Codes of Practice on Terrorism, and Online Child Sexual Exploitation and Abuse, which are in effect a template for obligations likely to be imposed under the future Online Safety Bill. The Full Response was silent on the apparent conflict with Article 15.

Now, the government has dropped its commitment to maintain alignment with Article 15. A new version of its post-Brexit eCommerce Directive guidance, published on 18 January 2021, says this:

“The eCommerce Directive also contains provisions relating to intermediary liability and prohibitions against imposing general monitoring obligations.

The government is committed to upholding the liability protections now that the transition period has ended. For companies that host user-generated content on their online services, there will continue to be a ‘notice and take down’ regime where the platform must remove illegal content that they become aware of or risk incurring liability.

The government also intends to introduce a new Online Safety regulatory framework. This will require companies to take action to keep their users safe, including with regard to illegal content. Details on what this will mean for companies are set out in the Online Harms White Paper: Full government response to the consultation, and the government plans to introduce legislation to Parliament this year.”

Notably, although a commitment to preserving some kind of hosting protection remains, there is now silence on preserving the prohibition on general monitoring obligations. The significance of this omission can hardly be overstated.

Legislation that takes conscious bites out of the Directive’s protections is one thing. But there is also a more subtle threat. Active maintenance of the statute book will be needed if the liability protections to which the government appears to be committed are not to be corroded by simple neglect. The Article 12 to 14 protections for conduit, caching and hosting activities are potentially liable to erode over time as the statute book is augmented and amended.

The reason for this lies partly in the horizontal nature of the protections. They are not tailored specifically to copyright, to defamation, to obscenity, or to any of the other myriad kinds of criminal and civil liability that might be incurred online.  Articles 12 to 14 are shields that apply across the board, whatever the subject matter of the liability.

The risk of erosion lies in the way in which successive governments have gone about legislating those protections. When the ECommerce Directive was first implemented in UK law, the 2002 Regulations enacted the Art 12 to 14 liability protections across the board: they applied to all existing laws under which liability within scope of the Directive might be incurred (except for financial services, for which the protections were legislated separately).

But, crucially, the 2002 Regulations stated that they did not have prospective effect. This meant that they applied only to legislation in existence when they came into force. On every occasion thereafter that a new criminal offence or civil wrong was created, or an existing one amended, the protections required by Arts 12 to 14 had to be specifically enacted for that offence or civil wrong. Administrative Guidance on Consistency of Future Legislation issued at the time by the Department of Trade and Industry stated:

“Legislators will need to give careful consideration to the question of whether any new requirements - again, whether in primary, secondary or tertiary legislation and whether reserved or devolved - create any offences which (or the aiding or abetting of which) could possibly be committed by a mere conduit, cache or host within the meanings of Regulations 17-19. If so, they will need to ensure that they recognise these limitations on the liability of intermediary service providers. Similar considerations will apply to any form of civil liability created by any new requirements.”

In an ideal world, this would have been done within the primary legislation that created the new or amended liability. Sometimes that happened. We can, for instance, see the conduit, caching and hosting protections included in Schedule 1 of the Hate Crime and Public Order (Scotland) Bill currently making its way through the Scottish Parliament. Sometimes, however, it was overlooked and the omission had to be remedied separately. Since the protections were required by an EU Directive, the necessary provisions could be enacted pre-Brexit by secondary legislation under the European Communities Act 1972. This was done on around 15 occasions, in addition to regulations implementing the protections for the financial services sector.

The result is a veritable hodgepodge of primary and secondary legislation enacted over the best part of 20 years, implementing – not always using the same language – the intermediary protections required by Articles 12 to 14 of the Directive. At my last count, in addition to the 2002 Regulations themselves there were over 30 separate subject matter-specific implementations dotted around different primary and secondary legislation – and I may well not have found them all.

Post-Brexit, the option of plugging gaps via the European Communities Act is no longer available. There will therefore be a greater premium on ensuring that, as in the Scottish Hate Crime Bill, the protections are included in the relevant legislation itself. If active scrutiny and maintenance are neglected, and the requisite protections are omitted from future legislation that creates new offences and civil liability, there will be a slow accretion of liabilities and offences to which the Directive’s conduit, caching and hosting protections do not apply.

If an omission has to be remedied, it would (unless some usable order-making power that I have not spotted is buried somewhere in the Brexit legislation) have to be done by further primary legislation. 

There is one potential qualification to this analysis. Could “Retained EU law” under the 2018 Withdrawal Act give the Directive itself a degree of post-Brexit prospective effect? If so, could the Directive’s liability protections be invoked against a future offence created by post-Brexit legislation which has omitted to address the liability position of conduits, hosts and caches? I do not pretend to know the answer to that, other than noting that the 2002 DTI Administrative Guidance was in no doubt that the liability provisions of the Directive had direct effect:

“If legislators fail to address such issues or fail to make proper provisions, the Directive will have direct effect in prohibiting them from imposing liability.”

However, any potential retention of direct effect would not offer any assistance in civil liability cases, since direct effect of Directives has been limited to the state, and not extended horizontally to affect rights as between private parties. For the Directive's intermediary liability protections the CJEU so held in Papasavvas (C-291/13).

[Amended 8 and 10 Feb 2021 to add reference to October 2020 government guidance; and 9 March 2021 to add reference to Papasavvas.]


Monday, 28 December 2020

Internet legal developments to look out for in 2021

Seven years ago I started to take an annual look at what the coming year might hold for internet law in the UK. This exercise has always, perforce, included EU law. With Brexit now fully upon us, future developments in EU law will no longer form part of UK law. Nevertheless, they remain potentially influential: not least because the 2018 EU Withdrawal Act provides that UK courts may have regard to anything relevant done by the CJEU, another EU entity or the EU after 31 December. In any case I am partial to a bit of comparative law. So this survey will continue to keep significant EU law developments on its radar.

What can we expect in 2021?

Copyright

Digital Single Market
EU Member States are due to implement the Digital Copyright Directive by 7 June 2021. This includes the so-called snippet tax (the press publishers’ right) and the Article 17 rules for online content-sharing service providers (OCSSPs). The UK is not obliged to implement the Directive and has said that it has no plans to do so. Any future changes to the UK copyright framework will be “considered as part of the usual domestic policy process”.

The Polish government’s challenge to Article 17 (Poland v Parliament and Council, Case C-401/19) is pending. Poland argues that Article 17 makes it necessary for OCSSPs, in order to avoid liability, to carry out prior automatic filtering of content uploaded online by users, and therefore to introduce preventive control mechanisms. It contends that such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportionate and necessary.

Linking and communication to the public

The UK case of Warner Music/Sony Music v TuneIn is due to come before the Court of Appeal early in 2021.

Pending CJEU copyright cases

Several copyright references are pending before the EU Court of Justice.

The YouTube and Uploaded cases (C-682/18 Peterson v YouTube and C-683/18 Elsevier v Cyando) referred from the German Federal Supreme Court include questions around the communication to the public right, as do C-392/19 VG Bild-Kunst v Preussischer Kulturbesitz (Germany, BGH), C-442/19 Brein v News Service Europe (Netherlands, Supreme Court) and C-597/19 Mircom v Telenet (Belgium). Advocate General Opinions have been delivered in YouTube/Cyando, VG Bild-Kunst and Mircom.

YouTube/Cyando and Brein v News Service Europe also raise questions about copyright injunctions against intermediaries, as does C-500/19 Puls 4 TV.

Linking, search metadata and database right

C-762/19 CV-Online Latvia is a CJEU referral from Riga Regional Court concerning database right. The defendant search engine finds websites that publish job advertisements and uses hyperlinks to redirect users to the source websites, including that of the applicant. The defendant’s search results also include information - hyperlink, job, employer, geographical location of the job, and date – obtained from metatags on the applicant’s website published as Schema.org microdata. The questions for the CJEU are whether (a) the use of a hyperlink constitutes re-utilisation and (b) the use of the metatag data constitutes extraction, for the purposes of database right infringement.
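To make concrete what the “metatag data” point involves, here is a minimal sketch (in Python) of how an aggregator might read Schema.org JobPosting microdata of the kind described. The sample markup, site name and parsing approach are illustrative assumptions based on the standard Schema.org vocabulary, not taken from the parties’ actual websites.

# Hypothetical illustration of Schema.org microdata parsing of the kind at
# issue in CV-Online Latvia. The sample markup and URLs are invented; the
# itemprop names follow the standard Schema.org JobPosting vocabulary.
from html.parser import HTMLParser

SAMPLE = """
<div itemscope itemtype="https://schema.org/JobPosting">
  <a itemprop="url" href="https://jobs.example.lv/vacancies/123">
    <span itemprop="title">Warehouse Manager</span>
  </a>
  <span itemprop="hiringOrganization">Example SIA</span>
  <span itemprop="jobLocation">Riga</span>
  <time itemprop="datePosted">2019-05-01</time>
</div>
"""

class JobMicrodataParser(HTMLParser):
    """Collects itemprop values, plus the href of the source hyperlink."""
    def __init__(self):
        super().__init__()
        self._current_prop = None
        self.record = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("itemprop")
        if prop == "url" and "href" in attrs:
            # The hyperlink redirecting users back to the source website
            self.record["url"] = attrs["href"]
        elif prop:
            self._current_prop = prop

    def handle_data(self, data):
        if self._current_prop and data.strip():
            self.record[self._current_prop] = data.strip()
            self._current_prop = None

parser = JobMicrodataParser()
parser.feed(SAMPLE)
print(parser.record)
# {'url': 'https://jobs.example.lv/vacancies/123', 'title': 'Warehouse Manager',
#  'hiringOrganization': 'Example SIA', 'jobLocation': 'Riga',
#  'datePosted': '2019-05-01'}

The questions referred ask, in effect, whether collecting and republishing the resulting record amounts to “extraction”, and whether presenting the hyperlink amounts to “re-utilisation”.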

Online intermediary liability

The UK government published its Full Consultation Response to the Online Harms White Paper on 15 December 2020, paving the way for a draft Online Safety Bill in 2021. The government has indicated that the draft Bill will be subject to pre-legislative scrutiny.

The German Federal Supreme Court has referred two cases (YouTube and Cyando – see above) to the CJEU asking questions about (among other things) the applicability of the ECommerce Directive hosting protections to UGC sharing sites. The Advocate General’s Opinion in these cases has been published.

Brein v News Service Europe and Puls 4 TV (see above for both) also ask questions around the Article 14 hosting protection, including whether it is precluded if communication to the public is found.

The European Commission published its proposals for a Digital Services Act and a Digital Markets Act on 15 December 2020. The proposed Digital Services Act includes replacements for Articles 12 to 15 of the ECommerce Directive.  The proposals will now proceed through the EU legislative process.

The European Commission’s Proposal for a Regulation on preventing the dissemination of terrorist content online is nearing the final stages of its legislative process, the Council and Parliament having reached political agreement on 10 December 2020. The proposed Regulation is notable for requiring one hour takedown response times and also for proactive monitoring obligations - potentially derogating from the ECommerce Directive Article 15 prohibition on imposing general monitoring obligations on conduits, caches and hosts.

The prospect of a post-Brexit UK-US trade agreement has prompted speculation that such an agreement might require the UK to adopt a provision equivalent to the US S.230 Communications Decency Act. However, if the US-Mexico-Canada Agreement precedent were adopted in such an agreement, that would appear not to follow (as explained here).

Cross-border 

The US and the UK signed a Data Access Agreement on 3 October 2019, providing domestic law comfort zones for service providers to respond to data access demands from authorities located in the other country. No announcement has yet been made that the Agreement has entered into operation. The Agreement has potential relevance in the context of a post-Brexit UK data protection adequacy decision by the European Commission.

Discussions continue on a Second Protocol to the Cybercrime Convention, on evidence in the cloud.

State surveillance of communications

The kaleidoscopic mosaic of cases capable of affecting the UK’s Investigatory Powers Act 2016 (IP Act) continues to reshape itself. In this field CJEU judgments remain particularly relevant, since they form the backdrop to any data protection adequacy decision that the European Commission might adopt in respect of the UK post-Brexit. The recently agreed UK-EU Trade and Co-operation Agreement provides a period of up to 6 months for the Commission to propose and adopt an adequacy decision.

Relevant CJEU judgments now include, most recently, Privacy International (Case C-623/17), La Quadrature du Net (C-511/18 and C-512/18), and Ordre des barreaux francophones et germanophone (C-520/18) (see discussion here and here).

Domestically, Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. Some EU law aspects (including bulk powers) were stayed pending the Privacy International reference to the CJEU. The Divisional Court rejected the claim that the IP Act data retention powers provide for the general and indiscriminate retention of traffic and location data, contrary to EU law. That point may in due course come before the Court of Appeal.

In the European Court of Human Rights, Big Brother Watch and various other NGOs challenged the pre-IP Act bulk interception regime under the Regulation of Investigatory Powers Act (RIPA). The ECtHR gave a Chamber judgment on 13 September 2018. That and the Swedish Rättvisa case were subsequently referred to the ECtHR Grand Chamber and await judgment. If the BBW Chamber judgment had become final it could have affected the IP Act in as many as three separate ways.

In response to one of the BBW findings the government has said that it will introduce ‘thematic’ certification by the Secretary of State of requests to examine bulk secondary data of individuals believed to be within the British Islands.

Software - goods or services?

Judgment is pending in the CJEU on a referral from the UK Supreme Court asking whether software supplied electronically as a download and not on any tangible medium constitutes goods and/or a sale for the purposes of the Commercial Agents Regulations (C-410/19 Computer Associates (UK) Ltd v The Software Incubator Ltd). The Advocate General’s Opinion was delivered on 17 December 2020.

Law Commission projects

The Law Commission has in train several projects that have the potential to affect online activity.

It is expected to make recommendations on reform of the criminal law relating to Harmful Online Communications in early 2021. The government has said that it will consider, where appropriate, implementing the Law Commission’s final recommendations through the forthcoming Online Safety Bill. The Law Commission issued a consultation paper in September 2020 (consultation closed 18 December 2020).

The Law Commission has also issued a Consultation Paper on Hate Crime Laws, which, while not specifically focused on online behaviour, inevitably includes it (consultation closed 24 December 2020).

It has recently launched a Call for Evidence on Smart Contracts (closing 31 March 2021) and is also in the early stages of a project on Digital Assets.

Electronic transactions

The pandemic has focused attention on legal obstacles to transacting electronically and remotely. Whilst such impediments are uncommon in commercial transactions, some do exist and, in a few cases, have been temporarily relaxed. That may pave the way for permanent changes in due course.

Although the question typically asked is whether electronic signatures can be used, the most significant obstacles tend to be presented by surrounding formalities rather than signature requirements themselves. A case in point is the physical presence requirement for witnessing deeds, which stands in the way of remote witnessing by video or screen-sharing. The Law Commission Report on Electronic Execution of Documents recommended that the government should set up an Industry Working Group to look at that and other issues.

Data Protection 

Traditionally this survey does not cover data protection (too big, and a dense specialism in its own right). On this occasion, however, the Lloyd v Google appeal pending in the UK Supreme Court should not pass without notice.

ePrivacy

EU Member States had to implement the Directive establishing the European Electronic Communications Code (EECD) by 21 December 2020. The Code brings ‘over the top’ messaging applications into the scope of ‘electronic communications services’ for the purpose of the EU telecommunications regulatory framework. As a result, the communications confidentiality provisions of the ePrivacy Directive also came into scope, affecting practices such as scanning to detect child abuse images. In order to enable such practices to continue, the European Commission proposed temporary legislation derogating from the ePrivacy Directive prohibitions. The proposed Regulation missed the 21 December deadline and continues through the EU legislative process.

Meanwhile there is as yet no conclusion to the long-drawn-out attempt to reach consensus on a proposed replacement for the ePrivacy Directive itself.

[Updated 29 December 2020 to add sections on Data Protection and ePrivacy.] 




Thursday, 17 December 2020

The Online Harms edifice takes shape

The government has now published the Final Response to its Consultation on the April 2019 Online Harms White Paper.

Background

To recap, in the White Paper the government proposed to impose a “duty of care” on companies whose services host user-generated content or facilitate public or private online interaction between users. The duty of care would also apply to search engines.

An intermediary in scope would have to take reasonable steps to prevent, reduce or mitigate harm occurring on its service, including harm arising from lawful content and activity deemed to be harmful. By its nature the duty placed on the intermediary would be to prevent the risk of one third party user causing harm to someone else.

This proposal differed from offline duties of care in two main respects. First, the White Paper did not limit or define the notion of harm. Comparable safety-related duties of care in the offline world are about objectively ascertainable physical injury and damage to property. An undefined concept of harm arising from online speech was inevitably subjective and malleable. It raised objections of impermissible vagueness, consequent arbitrariness, and the prospect of online speech being judged by the standard of the most easily offended reader, viewer or listener.

Second, in the offline world a safety-related duty of care that imposes liability for failure to prevent third parties injuring each other is the exception rather than the norm - and in any event has not been applied to speech.

The White Paper proposed that the intermediaries’ duty of care would be overseen and enforced by a discretionary regulator - subsequently indicated as likely to be Ofcom - reminiscent of the world of television and radio. This represented a radical departure from the offline world, in which individual speech is governed only by settled and certain general law, not broadcast-style regulation by regulator.

All this was presented under the banner of offline-online equivalence.

The effect of the proposed Online Harms regime, although presented as regulating the tech companies, is that the regulator would indirectly govern our own individual speech via the proxy of online intermediaries acting under the legal compulsion of the duty of care. If harm were left undefined and unlimited, then the regulator would in effect have the ability to write its own parallel rulebook for online speech – both as to what amounted to harm, and what steps an intermediary should take to mitigate the risk of speech that the regulator deemed to be harmful.

In February 2020 the government published an Initial Response to the White Paper signalling some revisions to the regime, in particular a ‘differentiated’ duty of care that would apply more lightly to content that was harmful but not illegal. There was still no attempt to define or limit the concept of harm.

The government has now confirmed that Ofcom will be the scheme’s discretionary regulator. The Final Response proposes a number of significant changes to the regime described in the White Paper.

Harms in scope

The most significant development is that the government has now:

  • Proposed a general definition of “harmful” content and activity: it must give rise to a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. [2.2] 
  • Significantly limited what counts as illegal user content and activity for the purposes of the duty of care: excluding civil liability altogether and also limiting the kinds of criminal offences in scope to those that meet the general definition of “harmful” [2.24].

It has also confirmed previous indications that harms to organisations will not be in scope. [2.2, 4.1] Nor would intellectual property breaches, data protection breaches, fraud, breaches of consumer protection law, cyber security breaches or hacking. Harm arising from dark web activity would also be excluded. [2.3]

The combined effect of these steps is that the subject matter of the duty of care has moved in the direction of comparable offline duties of care. It is now more focused towards personal safety properly so-called, rather than resting on unbounded notions of harm. That is also reflected in the new name for the legislation: the Online Safety Bill.

By way of example, the government now explains that disinformation should not be regarded as per se dangerous, and that to do so would trespass unacceptably on freedom of speech:

“the duty of care will apply to content or activity which could cause significant physical or psychological harm to an individual, including disinformation and misinformation. Where disinformation is unlikely to cause this type of harm it will not fall in scope of regulation. Ofcom should not be involved in decisions relating to political opinions or campaigning, shared by domestic actors within the law.” [2.81]

This paragraph recalls the difference of opinion between Home Office and DCMS Ministers over 5G conspiracy theories when giving evidence to the Home Affairs Committee in May 2020.

Nevertheless, the definition of harmful remains problematic: not least because inclusion of ‘psychological impact’ may suggest that the notion of harm is still tied to variable, subjective reactions of different readers. Subjectivity opens the door to application of a standard of the most readily upset user. And while the subject matter of the duty of care may be more closely aligned with traditional duties of care, its nature – a duty to prevent third parties from harming each other – remains the exception, not the norm, in the offline world.

The Final Response proposes the creation, by secondary legislation, of specific ‘priority categories’ of harmful content and criminal offences, posing the greatest risk to individuals. [24], [2.3], [2.20]. The significance of these categories would be in underpinning a reformulated version of the ‘differentiated’ duty of care that was floated in the government’s Initial Response (see further below).

Providers and services in scope

Under the revised proposals, in-scope providers would be split into two categories of provider, subject to versions of the duty of care differing both as to what steps would be required to discharge the duty of care, and in respect of what kinds of harmful content. Only services designated as Category 1 would be duty-bound to address legal but harmful content.

Ofcom would determine which services meet the criteria for Category 1, according to thresholds previously set by the government. The relevant factors would be set out in the legislation: size of audience and functionalities offered.

According to the Response, functionalities such as the ability to share content widely or contact users anonymously are more likely to give rise to harm. [2.16]. When world-wide availability is an inherent feature of the internet, to treat the ability to share content widely as inherently risky is challenging for a government that proclaims that freedom of expression is at the heart of the proposed regulatory framework [1.10]. Contrary to the popular slogan, freedom of reach is indeed an aspect of freedom of speech - as the Supreme Court of India has held:

"There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial." 

In the offline world, providing a venue specifically for activities that create a risk of danger is one situation in which a duty to prevent visitors injuring each other can arise. But to suggest that merely enabling individuals to speak to a large audience is a dangerously risky activity verges on an existential challenge to freedom of speech.

The Response excludes from scope:

  • certain ‘low-risk’ activities: user comments on digital content in relation to content directly published by a service. This would exclude online product and service reviews and ‘below the line’ reader comments on news website articles. [1.7]
  • three kinds of service: (a) B2B services as previously signalled in the Initial Response, (b) online services managed by educational institutions already subject to sufficient safeguarding duties or expectations, and (c) e-mail, voice telephone and SMS/MMS services. [1.6]

As to (c), the Response observes that “It is not clear what intermediary steps providers could be expected to take to tackle harm on these services before needing to resort to monitoring communications, so imposing a duty of care would be disproportionate.”

The result of the exclusions appears to be that the John Lewis customer review section would now be out of scope, but a site such as Mumsnet would still be in scope.

OTT private messaging services remain in scope [1.5]. The Response takes an approach to those that differs markedly from SMS/MMS services. Messaging providers may be required to monitor communications on private communications services, potentially by two routes.

First, it appears that Ofcom may have discretion to include monitoring in a Code of Practice. (Strictly speaking, however, this would not be mandatory, since it is always open to a provider to demonstrate to Ofcom that it can fulfil its duty of care as effectively in some other way [2.48].) The non-statutory interim code of practice on online child sexual exploitation and abuse (CSEA) published by the Home Office alongside the Response provides that automated technology should be considered on a voluntary basis.

Second, Ofcom would have express power to require companies to use “automated technology that is highly accurate” to identify illegal CSEA content and activity. This power would be usable where alternative measures cannot effectively address CSEA. Whilst the Response comments that this power is more likely to be considered proportionate on public platforms than private services, private services are not excluded. Ofcom would be required to seek approval from Ministers before exercising the power, on the basis that sufficiently accurate tools exist. The Response notes that the government assesses that, currently, sufficiently accurate tools exist to identify CSEA material that has previously been assessed as illegal. [2.59, 2.60]
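The reference to tools that are sufficiently accurate for “previously assessed” material presumably points to matching uploads against databases of known content. By way of rough illustration only (an assumption about the general technique, not a description of the particular tools the government has in mind), matching previously assessed material is essentially a database lookup rather than an evaluative judgment:

# Minimal sketch of known-content matching, assuming a curated database of
# digests of material previously assessed as illegal. Real deployments
# typically use perceptual hashes (which survive resizing and re-encoding)
# rather than the exact cryptographic hash shown here; this is an
# illustration of the general idea, not a design.
import hashlib

# Placeholder entries standing in for a curated database of full digests.
KNOWN_DIGESTS = {
    "d2f0...",  # hypothetical truncated entry for illustration
}

def matches_known_content(upload: bytes) -> bool:
    """True if the upload is an exact copy of previously assessed material."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_DIGESTS

Identifying material that has never been assessed before is a different exercise, requiring probabilistic classification, which is where questions of accuracy and proportionality bite hardest.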

Encryption is not mentioned in the Response.

News media and journalism

The potential application of the legislation to news media and journalism has been fraught from the outset. The White Paper did not mention the issue, following which the then Secretary of State wrote to the Society of Editors assuring them that “where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

This left questions unanswered, for instance the position of mainstream news media not regulated by IPSO or IMPRESS. Nor did it address the position of newspapers’ own social media pages and feeds, which would count as user generated content and thus be indirectly regulated by Ofcom via the intermediaries’ duty of care.

The Final Response is, if anything, less clear than previously. It confirms that comment sections on news publishers’ websites would be out of scope, by virtue of the ‘low risk’ user comments exclusion mentioned above.  For social media feeds, it says that legislation will include ‘robust protections’ for journalistic content shared on in-scope services. As to what those protections might be, and what might count as journalistic content, the Response is silent. [1.10, 1.12]

Differentiated duty of care

The Initial Response proposed a differentiated duty of care, whereby for legal but harmful material and activities in-scope providers would be required only to enforce transparently, consistently and (perhaps) effectively, the standards that they chose to incorporate in their terms and conditions.

It always did seem unlikely that, for ‘legal but harmful’ content, the government intended to leave intermediaries completely to their own devices as to what standards (if any) to incorporate in their user terms and conditions. In 2018, after all, the government had said in its consultation response to the Internet Safety Strategy Green Paper that:

“The government has made clear that we require all social media platforms to have [inter alia]: Terms and conditions that provide a minimum level of safety and protection for users”.

So it has proved.  The proposal in the Final Response is complex and nuanced. Its main features are:

  • Providers that exceed specified audience and functionality thresholds will be designated as Category 1 providers (see above). 
  • All in-scope providers will be expected to assess whether children are likely to access their services and, if so, to provide additional protections for children using them [2.15] 
  • Only Category 1 providers will be required to take action with regard to legal but harmful content and activity accessed by adults [2.15].
  • The duty of care of non-Category 1 providers for adults would therefore apply only in relation to criminal content and activities (of a kind not otherwise excluded) that present a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals.

It should follow, although the Response does not spell this out completely clearly, that for non-Category 1 providers the general obligations listed below (such as risk assessment) would apply only in relation to the risk of such criminal content and activities – and that ‘safety’ should also be understood in that sense.

For Category 1 providers the general obligations would apply additionally to legal content and activity presenting a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. 

General obligations

  • All in-scope providers have a primary responsibility to take action to prevent user-generated content or activity on their services causing significant physical or psychological harm to individuals. To do this they will complete an assessment of the risks associated with their services and take reasonable steps to reduce the risks of the harms they have identified occurring. [2.7]
  • Providers will fulfil the duty of care by putting in place systems and processes that improve user safety on their services – including, for example, user tools, content moderation and recommendation procedures. [2.9]
  • Providers will be required to consider users’ rights, including freedom of expression online, both as part of the risk assessment and when making decisions on what safety systems and processes to put in place. [2.10]
  • Regulation will ensure transparent and consistent application of terms and conditions relating to harmful content. This will include preventing companies from arbitrarily removing content. [2.10]
  • Users must be able to report harm when it does occur and seek redress, challenge wrongful takedown and raise concerns about companies’ compliance with their duties. [2.11]
  • All providers will have a specific legal duty to have effective and accessible reporting and redress mechanisms. This will cover harmful content and activity, infringement of rights (such as over-takedown), or broader concerns about a company’s compliance with its regulatory duties [2.12]

Illegal content and activities

  • For in-scope criminal activity, all providers will need to ensure that illegal content is removed expeditiously and that the risk of it appearing and spreading across their services is minimised by effective systems [2.19]
  • Priority categories of offences, against which providers will be required to take particularly robust action, will be set out in secondary legislation. [2.20] For CSEA and terrorism this may include proactively identifying and blocking or removing this type of material if other steps have not been effective and safeguards are in place. [2.21]

The Response is silent as to how such an obligation may be consistent with the prohibition on general monitoring obligations under Article 15 of the eCommerce Directive. The government has said, in the context of Brexit, that it has no current plans to change the UK’s approach to prohibition on general monitoring requirements.

Legal but harmful content and activity accessed by adults (Category 1 providers only)

  • The legislation will not require removal of specific pieces of legal content [2.28], unless specified as not permitted by the provider’s terms and conditions [2.33]. Terms and conditions could be about, for example, labelling and de-prioritising [2.32].
  • Priority categories of legal but harmful material will be set out in secondary legislation. These will be categories of legal but harmful material that Category 1 providers should, at a minimum, address through their terms and conditions. The Response gives the examples of content promoting self-harm, hate content, online abuse that does not meet the threshold of a criminal offence, and content encouraging or promoting eating disorders. [2.29]
  • Category 1 providers will be obliged to state how they will handle other categories of legal but harmful material identified in their risk assessment and make clear what is acceptable on their services for that content. [2.31]

Controversial viewpoints

  • Category 1 companies will not be able to arbitrarily remove controversial viewpoints and users will be able to seek redress if they feel that content has been removed unfairly. [2.34]
  • User redress mechanisms will enable users to challenge content that unduly restricts their freedom of expression. This appears to apply to all in-scope providers (Annex A).

These provisions appear to be the ‘impartiality’ requirements that were trailed in the press before the release of the Final Response, reportedly at the instigation of 10 Downing Street. It is unclear whether these provisions are intended to override substantive policies set out in providers’ terms and conditions. They appear to be unrelated to, or at least to go wider than, issues about illegal or harmful content.

Children

  • All companies in scope will be required to assess the likelihood of children accessing their service. [2.36] Only services likely to be accessed by children will be required to provide additional protections for children accessing them, starting with conducting a specific child safety risk assessment. [2.36], [2.37]
  • The government will set out in secondary legislation priority categories of legal but harmful content and activity impacting children, meeting the general definition of harmful content and activity already described. These will be categories impacting children that companies in scope should, at a minimum, take action on. [2.38]
  • Age assurance and age verification technologies are expected to play a key role in fulfilling the duty of care. [2.41]

Codes of Practice

The Final Response has increased the amount of influence that the government will have over Ofcom’s Codes of Practice. Ofcom will be required to send the final draft of a Code of Practice to the Culture Secretary and the Home Secretary, who will have the power to reject a draft code and require the regulator to make modifications for reasons relating to government policy.

Parliament will have the opportunity to debate and vote on the high level objectives set out by the government for the Codes of Practice by the affirmative resolution procedure. Completed codes will be laid in Parliament, subject to negative resolution. [4.10]

Search engines

Little is said in the Final Response about how the proposed duty of care would apply to search engines, beyond a brief summary of actions that they can take to mitigate the risk of harm and proportionate systems and processes that they would be expected to put in place to keep their users safe.

Search engines would need to assess the risk of harm occurring across their entire service. Ofcom would provide guidance specific to search engines regarding regulatory expectations.

The government proposes that given the distinct nature of search engines, legislation and codes of practice would include specific material for them. It says that all regulatory requirements would be proportionate, and respect the key role of search engines in enabling access to information online. [1.3]

Territoriality

For the first time, the Final Response has set out the proposed territorial reach of the proposed legislation. Somewhat surprisingly, it appears to propose that services should be subject to UK law on a ‘mere availability of content’ basis. Given the default cross-border nature of the internet, this is tantamount to legislating extraterritorially for the whole world. It would follow that any provider anywhere in the rest of the world would have to geo-fence its service to exclude the UK in order to avoid engaging UK law. Legislating on a mere availability basis has been the subject of criticism over many years since the advent of the internet. [1.1]
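To make the practical consequence concrete, here is a minimal sketch, on assumed facts, of the geo-fencing a non-UK provider would need in order to avoid engaging UK law on a mere availability basis. The IP-to-country lookup is deliberately passed in as a parameter, since no particular geolocation database is implied; everything here is illustrative.

```python
def handle_request(client_ip: str, country_of) -> str:
    """Refuse service to requests that appear to originate in the UK.

    `country_of` stands in for any IP-to-country lookup (e.g. a GeoIP
    database). IP geolocation is imprecise and circumventable by VPN,
    which is part of what makes 'mere availability' jurisdiction so
    burdensome in practice.
    """
    if country_of(client_ip) == "GB":
        return "451 Unavailable For Legal Reasons"
    return "200 OK"

# Usage with a stub lookup that treats every address as UK-based:
print(handle_request("203.0.113.7", lambda ip: "GB"))  # refused
```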

Overall commentary

The fundamental issues with the government’s White Paper proposals have been exhaustively discussed on previous occasions. Reminiscent of a sheriff in the Wild West, to which the internet is so often likened, Ofcom would enlist deputies - social media platforms and other intermediaries acting under a legal duty of care - to police the unruly online population. Unlike its Wild West equivalent, however, Ofcom would get to define its territory and write the rules, as well as enforce them.

The introduction of a general definition of harm would tie Ofcom’s hands to some degree in deciding what does and does not constitute harmful speech. Limiting the scope of ‘harm’ to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals goes some way to align the proposed duty of care more closely with analogous offline duties of care, which are specifically safety-related.

Nevertheless, when applied in the context of speech there remain significant problems.

1. What is an adverse psychological impact? Does it have to be a medically recognised condition? If not, how wide is it meant to be? Is distress sufficient? The broader the meaning, the closer we come to a limitation that could mean little or nothing more than being upset or unhappy. The less clear the meaning, the more discretion would be vested in Ofcom to decide what counts as harm, and the more likely that providers would err on the side of caution in determining what kinds of content or activity are in scope of their duty of care.

2. The difficulty, not to say virtual impossibility, of the task faced by the regulator and providers should not be underestimated. Thus, for the lawful but harmful category, the government has said that it will include online abuse as a priority category in secondary legislation. However, on the basis of these proposals that must be limited to abuse that falls within the general definition of harm – i.e. abuse that presents a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. The provider’s actions under the duty of care should relate only to such harmful abuse. Where, concretely, is the dividing line between abuse that does and does not carry a foreseeable risk of adverse psychological impact? What content falls on either side of the line?

The provider would also have to take into account the proposed obligation not to remove controversial viewpoints and the possibility of user redress for unduly restricting their freedom of expression. Coincidentally, the Divisional Court in Scottow v CPS has in the last few days issued a judgment in which it referred to “the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another”.

These issues illustrate the care that has to be taken with using terms such as ‘online abuse’ to cover everything from strong language, through insults, to criminal threats of violence.

3. What is the threshold to trigger the duty of care? Is it the risk that someone, somewhere, might read something and claim to suffer an adverse psychological impact as a result? Is it a risk gauged according to the notional attributes of a reasonably tolerant hypothetical user, or does the standard of the most easily upset apply? How likely does it have to be that someone might suffer an adverse psychological impact if they read it? Is a reasonably foreseeable, but low, possibility sufficient? 

The Media Minister John Whittingdale, writing in the Daily Mail on the morning of the publication of the Final Response, said:

“This is not about an Orwellian state removal of content or building a ‘woke-net’ where causing offence leads to instant punishment.  Free speech includes the right to offend, and adults will still be free to access content that others may disapprove of.”

If risk and harm thresholds are sufficiently low and subjective, that is what would result.

4. Whatever the risk threshold might be, would it be set out in tightly drawn legislation or left to the discretion of Ofcom? It will not be forgotten that Ofcom, in a 2018 survey, suggested to respondents that ‘bad language’ is a harmful thing. A year later it described “offensive language” as a “potential harm”.

5. Lastly, in the absence of deliberate intent an author owes no duty to avoid causing harm to a reader of their work, even though psychological injury may result from reading it. That was confirmed by the Supreme Court in Rhodes. The government’s proposals would therefore mean that an intermediary would have a duty to consider taking steps in relation to material for which the author itself has no duty of care.

These are difficult issues that go to the heart of any proposal to impose a duty of care. They ought to have been the subject of debate over the last couple of years. Unfortunately they have been buried in the rush to include every conceivable kind of harm - however unsuited it might be to the legal instrument of a duty of care - and in discussions of ‘systemic’ duties of care abstracted from consideration of what should and should not amount to harm.

It should be no surprise if the government’s proposals became bogged down in a quagmire resulting from the attempt to institute a universal law of everything, amounting to little more than a vague precept not to behave badly online. The White Paper proposals were a castle built on quicksand, if not thin air.

The proposed general definition of harm, while not perfect, gives some shape to the edifice. It at least sets the stage for a proper debate on the limits of a duty of care, the legally protectable nature of personal safety online, and its relationship to freedom of speech – even if that should have taken place two years ago. Whether regulation by regulator is the appropriate way to supervise and police an appropriately drawn duty of care in relation to individual speech is another matter.



Thursday, 15 October 2020

Hard questions about soft limits

The judgments of the Grand Chamber of the EU Court of Justice in Privacy International (C-623/17) and the joined cases of La Quadrature du Net (C-511/18 and C-512/18) and Ordre des barreaux francophones et germanophone (C-520/18) landed with a reverberating thud on the morning of 6 October 2020.

These referrals, from the UK, France and Belgium, posed questions about the compatibility with EU law of state surveillance legislation in each country. Although differing from each other in some respects, the cases all had in common that they concerned retention, processing or transmission to the authorities not of the content of messages, but contextual ‘communications data’ such as sender, recipient, time of sending, length and type of communication, kind of device and its location.
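For readers less familiar with the terminology, the distinction can be pictured as a record structure: the cases concern the outer envelope, not the letter inside. A minimal sketch, with purely illustrative field names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CommunicationsData:
    """Contextual metadata -- the subject matter of these references."""
    sender: str
    recipient: str
    sent_at: datetime
    duration_seconds: int   # length of the communication
    kind: str               # type of communication, e.g. 'voice', 'sms'
    device: str             # kind of device used
    cell_location: str      # location of the device

@dataclass
class Communication:
    """The message itself; its content is not at issue in these cases."""
    metadata: CommunicationsData
    content: str
```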

Adequacy

From a UK perspective, the main interest is in the potential effect of these judgments on the expected decision by the European Commission on the adequacy – or not – of the UK’s regime for protection of personal data. An adequacy decision would enable the UK to maintain unhindered flows of personal data from the EU post-Brexit. Although the UK has largely replicated the GDPR, the UK’s communications surveillance regime will still be relevant to an adequacy decision – as the Schrems 2 litigation over the EU-US Privacy Shield has highlighted.

Although none of last week’s CJEU judgments addressed the current UK communications surveillance framework under the Investigatory Powers Act 2016, the judgments will be closely scrutinised and mapped on to that. The UK government has described the current surveillance regime at Section H of its Explanatory Framework for Adequacy Discussions, produced for the purposes of negotiation with the EU.

The CJEU referrals

The three CJEU cases addressed different kinds of activity that the respective national legislation could authorise and require service providers to undertake. Although the judgments have generally been reported as being about mandatory retention of communications data, they are not limited to that. They also address national legislation requiring automated analysis of communications data to detect terrorism, and real-time feeds to security and intelligence authorities. 

The cases also vary between legislation directly imposing blanket obligations on all service providers, and legislation conferring discretionary powers on national authorities enabling them to require individual service providers to engage in stipulated activities. This is now becoming a critical distinction.

The UK reference concerned Section 94 of the Telecommunications Act 1984. This enabling legislation conferred a general power on a Secretary of State to give directions to providers of public electronic communications networks in the interests of national security or of relations with a foreign government. In November 2015 the UK government publicly acknowledged for the first time that this power had been used to require providers to transfer some kinds of communications data in bulk to the security and intelligence agencies (GCHQ and MI5).  (S.94 has since been repealed and, for this purpose, is superseded by the bulk communications data acquisition warrant under the Investigatory Powers Act 2016.)

The Belgian reference concerned mandatory communications data retention.  The Belgian law in question imposed a blanket obligation on all service providers to retain, for 12 months, various kinds of subscriber, traffic and location data (including both origin and destination of communications). The law then stipulated purposes for and conditions under which various kinds of state authority could issue demands for data to be handed over. Data could be used for a wide variety of criminal investigations.

The French reference, as it related to communication data retention, concerned legislation directly imposing a blanket obligation on all service providers for the purpose of investigating, detecting and prosecuting criminal offences. The reference also dealt with a series of discretionary statutory powers enabling the French authorities to instruct providers to carry out a variety of communications data analysis and reporting activities:

-          For the purpose of preventing terrorism, real-time transfer of communications data relating to a person previously identified as potentially having links to a threat, and to associates of such person believed on substantial grounds to be capable of providing relevant information. (L.851-2)

-          For the purpose of preventing terrorism, automated data processing by the service provider designed, within the parameters laid down in the authorisation, to detect links that might constitute a terrorist threat; and where data has been detected as likely to point to the existence of a terrorist threat, a procedure for authorising identification of the person concerned and collection of the related data. (L.851-3)

-          Real-time transmission to the authorities of technical data relating to the location of terminal equipment for a wide variety of, broadly, security-related purposes. (L.851-4)

Principles

The CJEU articulated a number of points of principle. Of especial relevance are:

-         The same issues of compliance with EU law and the EU Charter of Fundamental Rights that were discussed (for data retention) in Digital Rights Ireland and Tele2/Watson arise with transmission of data to third parties and access to data with a view to its use. (C-623/17 [61])

-         Information that may be provided by profiling using traffic data and location data is no less sensitive than the actual content of communications. (C-623/17 [71]; C-511/18 et al [117], [184])

-         Transmission of traffic data and location data to persons other than users constitutes interference with fundamental rights, regardless of how that data is subsequently used. (C-623/17 [69] and [70])

-         Transmission to public authorities has the effect of making that data available to them. Legislation which permits general and indiscriminate transmission of data to public authorities entails general access. (C-623/17 [79] and [80])

-         The ePrivacy Directive requires that exceptions to confidentiality of communications remain exceptions. Legislation enabling general and indiscriminate transmission of traffic and location data to the authorities renders the exception the rule. That is not permissible. (C-623/17 [69], C-511/18 et al [111], [142])

-         The Charter requirement that any limitation on the exercise of fundamental rights be provided for by law implies that the legal basis which permits the interference with those rights must itself define the scope of the limitation on the exercise of the right concerned. (C-623/17 [65], C-511/18 [175]) (citing Schrems 2, [175])

-         General access to all retained data (including by general and indiscriminate transmission), regardless of whether there is any link, at least indirect, with the aim pursued, cannot be regarded as strictly necessary.  (C-623/17 [78], [80], [81])

-         The objective of safeguarding national security is capable of justifying measures entailing more serious interferences with fundamental rights than might be justified by the other objectives set out in Article 15(1) of the ePrivacy Directive. (C-623/17 [75], C-511/18 et al [136])

-         It is not sufficient for legislation to specify the purpose for which powers may be exercised. It must, by means of clear and precise rules, lay down the substantive and procedural conditions governing the use of the data, thereby ensuring that the interference is limited to what is strictly necessary. (C-623/17 [68], [77]; C-511/18 et al [132], [133], [155], [166] to [168], [176])

Applying the principles

How did these and other principles relied upon by the CJEU translate into EU law compatibility (or otherwise) of the powers under consideration?

First, a cautionary note. The CJEU style of judgment tends towards what might be called ‘opaque clarity’: ringing declarations of high principle, the concrete meaning of which is left for another day. The Court of Appeal has observed: “The CJEU is notorious for making pronouncements resembling those of the oracle at Delphi…”.

A classic example in the present field is the prohibition on “general and indiscriminate retention”, contrasted with ‘targeted retention’. The exact position of the boundary between the two has yet to be discovered. Not only that, but what might have appeared from previous CJEU judgments to be a prohibition of general application now turns out to have context-specific exceptions.

This characteristic of CJEU judgments, especially relevant where the EU Charter is concerned, has to be borne in mind when attempting to extrapolate them to different facts and contexts.

Internal service provider activities versus state access

In these judgments the CJEU drew a high-level distinction between retention and processing activities internal to service providers, and access to data by the authorities.

On the service provider side of the boundary, legislation compelling general and indiscriminate activities is generally precluded. However, the Court indicated some limited situations and purposes in which legislation could mandate service providers to engage in general and indiscriminate retention (in some cases limited to particular kinds of communications data, such as source IP addresses and subscriber identity data), or to undertake automated processing of all communications data retained by them.

By contrast, in no circumstances – or at least none considered by the Court – was it permissible for legislation to provide the authorities with general and indiscriminate access to communications data held by the service providers, including (as with the UK’s Section 94) by mandatory transmission to the authorities.

Blanket obligations versus enabling legislation

The CJEU has previously had no hesitation in holding legislation that directly imposes a blanket data retention obligation on all service providers to be incompatible with EU law. It did that in Tele2/Watson for the Swedish legislation in issue in that case. In these latest cases it has done the same for the French and Belgian blanket data retention legislation.

The position is more nuanced with legislation conferring discretionary powers. The CJEU in Tele2/Watson set out a series of principles applicable to data retention legislation, but stopped short of holding that the then UK data retention legislation (DRIPA) was incompatible with EU law.  That assessment was returned to the UK court. 

DRIPA was structured as enabling legislation, empowering the Secretary of State to issue notices to service providers for up to 12 months. DRIPA required the Secretary of State to consider that issuing a data retention notice was necessary and proportionate for one of the purposes enumerated in the Act. The current IP Act is in similar terms, although additionally requiring the Secretary of State to take into account a number of factors set out in the legislation. A retention notice under the IP Act is also subject to prior approval of an independent Judicial Commissioner.

The question then arises whether, as a matter of EU law, it is sufficient for Member State legislation to require the relevant authorities to exercise a discretionary power in accordance with necessity and proportionality principles, accompanied by safeguards aimed at ensuring that this is achieved. Or must the statute itself set out substantive limits on the exercise of the power?

Two distinct points are in play here: first, could the power in question be exercised in a way that strays into requiring the service provider to undertake illegitimate general and indiscriminate activities? Second, does legislation that relies primarily on obligating observance of principles and establishing safeguards, in preference to setting hard limits on the exercise of a power, satisfy the EU law requirement for clear and precise rules?

Taken to the extreme, could an otherwise insufficiently circumscribed general discretionary power be saved by a provision requiring it to be exercised in accordance with the EU Charter of Fundamental Rights? If the answer to that is ‘No’, then how far must the legislation go in setting substantive limits?

The requirement for clear and precise rules is nominally the same as the European Convention on Human Rights ‘prescribed by law’ test. However, there are indications that the CJEU may be open to taking a stricter approach than does the Strasbourg court. The CJEU at paragraph [124] of the La Quadrature decision refers to taking account of the ECHR as establishing a ‘minimum threshold of protection’.

The IP Act in the English courts

By the time the Watson case returned to the English Court of Appeal, DRIPA had been superseded by the IP Act. Separately, Liberty had commenced proceedings challenging the data retention and bulk powers provisions of the IP Act.  The question of compatibility with EU law was therefore left to be determined in the Liberty proceedings.  In April 2018 the Divisional Court held that the IP Act data retention powers were compatible with EU law.

As to the second point (hard limits), the court did not read the Watson decision as requiring detailed factors (as it described them) to be listed in domestic legislation.  It was sufficient if the legislation permitted decisions to be taken that were (a) sufficiently connected with the objective being pursued (b) strictly necessary and (c) proportionate ([124]), coupled with safeguards so as to achieve effective protection against the risk of misuse of personal data. ([125])

The obligation on the Secretary of State to exercise the power only if she considered it both necessary and proportionate for one or more of the purposes listed in the Act “enshrines in the statute the essence of the tests propounded in Watson”. ([128])

The court found that the limits suggested by the CJEU in Watson (by reference to categories of persons and geographical areas) were not exhaustive or prescriptive. The suggested limits were examples of parameters that could be used according to the facts of a particular situation. ([123]) It would be impractical and unnecessary to set out in detail in legislation the range of factors which might fall to be applied according to the circumstances of different cases ([124]).

As to the first point (general and indiscriminate retention), the court said that it was difficult to conceive how a notice encompassing all communications data in the UK could satisfy the statutory necessity and proportionality tests ([129]); and that it could not possibly be said that the legislation required, or even permitted, a general and indiscriminate retention of communications data ([135]).

Must the Member State make a list?

This approach prompts the question: does the fact that the criteria suggested by the CJEU were not prescriptive or exhaustive mean that a Member State does not have to list in its own legislation a set of conditions constraining the exercise of a discretionary power, so that their compliance with strict necessity can be gauged? Is it sufficient to lay down factors to be taken into account when exercising the power? Would the latter enable the scope of the power to be tested objectively against connection with the objective pursued?

Although the CJEU in Watson observed at [110] that the conditions might vary according to the nature of the measures taken for the purposes of prevention, investigation, detection and prosecution of serious crime, it referred to “substantive conditions which must be satisfied by national legislation”. It went on to say that such conditions must be shown to be such as actually to circumscribe, in practice, the extent of that measure and, thus, the public affected.

Schrems 2 appears

At the time of the Divisional Court’s Liberty decision the CJEU had not held any enabling legislation to be incompatible with EU law. That has now changed. First, the Schrems 2 decision, albeit considering essential equivalence of US laws with EU personal data protection rather than compatibility of a Member State’s laws, held that certain US enabling provisions did not provide adequate protection of personal data. The limitations on personal data protection were not: “circumscribed in a way that satisfies requirements that are essentially equivalent to those required, under EU law” ([185]).

Schrems 2 emphasised that:

“the requirement that any limitation on the exercise of fundamental rights must be provided for by law implies that the legal basis which permits the interference with those rights must itself define the scope of the limitation on the exercise of the right concerned…” [175] (emphasis added)

Like previous CJEU judgments it distinguished between the legislation itself and a measure that it empowered:

“the legislation in question which entails the interference must lay down clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards … . It must, in particular, indicate in what circumstances and under which conditions a measure providing for the processing of such data may be adopted, thereby ensuring that the interference is limited to what is strictly necessary.” [176] (emphasis added)

These points were repeated in the recent CJEU judgments, emphasising also that the legislation must be legally binding under domestic law (La Quadrature [132]).

The emphasis on clear and precise conditions set out in the legislation itself raises anew the question whether an approach based primarily on safeguards and oversight of a broad discretionary power is compatible with EU law.

If it remains possible for the discretion to be exercised in a way that results in impermissible general and indiscriminate retention, then EU law is not complied with. 

Further, the more is left to discretion, the less likely it would seem that the criterion of practical effect resulting from substantive conditions would be satisfied:

“the substantive conditions which must be satisfied by national legislation … must be shown to be such as actually to circumscribe, in practice, the extent of that measure and, thus, the public affected.” Tele2/Watson [110]).

This is illustrated by the holdings in Schrems 2 regarding the two specific US surveillance programmes under consideration. The programmes authorised collection of both communications data and content.

The CJEU held that S702 FISA did not itself define the scope of the limitation on the exercise of the right concerned and lay down clear and precise rules governing the scope and application of the measure in question (nor impose minimum safeguards). S702 authorised surveillance programmes rather than individual surveillance measures. The supervisory role of the FISC was designed to verify whether surveillance programmes related to the objective of acquiring foreign intelligence information, not whether individuals were properly targeted to acquire foreign intelligence information.

Similarly it held that PPD‑28, which allowed, in the context of the surveillance programmes based on E.O. 12333, access to data in transit to the United States without that access being subject to any judicial review, did not, in any event, delimit in a sufficiently clear and precise manner the scope of such bulk collection of personal data. It allowed for bulk collection … of a relatively large volume of signals intelligence information or data under circumstances where the Intelligence Community could not use an identifier associated with a specific target to focus the collection.

In those circumstances, the CJEU held that limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data were not circumscribed in a way that satisfied requirements essentially equivalent to those required under EU law ([185]).

Section 94 - EU law versus ECHR

In the Privacy International case the CJEU's findings appear unavoidably to lead to the conclusion that the UK S.94 enabling legislation was contrary to EU law. Two points are noteworthy:

First, Section 94(2A) stipulated that “The Secretary of State shall not give a direction … unless he believes that the conduct required by the direction is proportionate to what is sought to be achieved by that conduct.” Similar provisions are contained in the IP Act.

Second, the incompatibility ruling applies to S.94 after avowal, publication of Handling Arrangements and commencement of independent oversight in November 2015. For the period following that, the IPT had held that s.94 complied with the ECHR 'provided by law' requirement:

“The ICC concluded … that the relevant agencies had introduced comprehensive procedures, in accordance with the Handling Arrangements, to ensure that they only acquired and retained bulk communications data, and then accessed and undertook analysis of that data, in order to pursue their functions under SSA 1989 or ISA 1994. The essential protection against a potential abuse of power under s.94, namely a requirement that the BCD may only be obtained and used for proper purposes, is thus provided by law, and subject to effective oversight.” [91]

This approach (echoed in the Divisional Court judgment in Liberty discussed above) stands in apparent contrast to the CJEU’s stipulation that:

“legislation cannot confine itself to requiring that authorities’ access to the data be consistent with the objective pursued by that legislation, but must also lay down the substantive and procedural conditions governing that use”. (Privacy International [77], also see La Quadrature [176])

This suggests that legislation should specify the criteria that the authorities must satisfy, and the authorities must decide whether, in a particular situation, the criteria are met – if necessary, backed up by verification and approval by an independent authority.

The CJEU appears to regard the criteria to be met as gateways to be passed through, rather than as factors to be taken into account by the authorities when exercising their discretion. At the least, the judgments appear to lean in the direction of requiring concrete limits to be spelled out in the legislation, rather than left to surrounding safeguards.

It is an open question how far the CJEU’s approach might encompass instruments such as statutory Codes of Practice within ‘legislation’. The answer may depend both on whether they lay down sufficiently clear and precise rules and on whether they pass the ‘legally binding under domestic law’ test. If so, then it would be a question of whether the constraints imposed were sufficient to bring the powers within the relevant substantive limits identified by the CJEU.

The Investigatory Powers Act

How does this approach map on to the Investigatory Powers Act? Looking beyond the end of the Brexit transition period, the significant question will not be compliance with EU law as such, but whether the UK regime provides “essentially equivalent” protection for personal data. However, the two are closely related. Furthermore, the IP Act’s compliance with EU law is one aspect of the pending domestic judicial review by Liberty, which (as regards general and indiscriminate data retention at least) may in due course be considered by the Court of Appeal.

Two aspects of the IP Act that overlap with the CJEU judgments are data retention (Part 4) and the bulk communications data acquisition warrant (Part 6 Chapter 2). The latter is for these purposes the IP Act replacement for Section 94 TA.

However, bulk communications data (known as secondary data) is also collected by means of bulk interception warrants under Part 6 Chapter 1. Even the targeted warrantry regime could be relevant, given the possibility of obtaining ‘thematic’ warrants.

For the sake of simplicity I will focus on two powers: data retention and the bulk communications data acquisition warrant. Both of these are enabling provisions, rather than blanket requirements. 

Hard versus soft limits

Although hedged around with more safeguards than either DRIPA (for data retention) or Section 94 (for bulk communications data acquisition) - including prior approval by a Judicial Commissioner of a notice or warrant respectively - both powers adopt the model of a broad power exercisable for broadly defined purposes.

The data retention power enables the Secretary of State (subject to Judicial Commissioner approval), if she considers the requirement necessary and proportionate for one or more of the purposes set out in the Act, to require a telecommunications operator by notice to retain certain categories of communications data. The notice must not require retention for longer than 12 months. It can specify either a single operator or description of operators, the data to be retained and the retention period.

The power cannot be exercised solely on the basis that the data relates to the activities in the British Islands of a trade union.

The Secretary of State is required to take into account a number of factors before giving a retention notice, including the likely benefits of the notice and the likely number of users (if known) of any telecommunications service to which the notice relates.  
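In outline, then, a retention notice is a bounded instrument: one or more operators (or a description of operators), the data categories to be retained, and a period capped at 12 months. A minimal sketch of that structure, with hypothetical names (the necessity and proportionality judgments themselves obviously cannot be reduced to code):

```python
from dataclasses import dataclass

MAX_RETENTION_DAYS = 365  # the Act caps a notice at 12 months

@dataclass
class RetentionNotice:
    operators: list[str]        # a single operator or a description of operators
    data_categories: list[str]  # the communications data to be retained
    retention_days: int         # the retention period specified in the notice

    def within_statutory_cap(self) -> bool:
        return self.retention_days <= MAX_RETENTION_DAYS

notice = RetentionNotice(["Example Telecom Ltd"], ["subscriber data"], 365)
print(notice.within_statutory_cap())  # True
```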

The bulk communications data acquisition power is one of several bulk powers grouped under Part 6 of the IP Act. The Secretary of State may (subject to Judicial Commissioner approval), if she considers it necessary on various national security grounds, issue an intelligence service with a warrant authorising bulk acquisition of communications data. She must consider that the conduct authorised by the warrant is proportionate to what is sought to be achieved by the conduct.

She must also consider that examination of acquired communications data is or may be necessary for each operational purpose specified in the warrant, in addition to the grounds on which she considered the warrant to be necessary.

Necessity on national security-related grounds cannot be established solely on the basis that the data relates to the activities in the British Islands of a trade union.

A bulk acquisition warrant can be issued for up to 6 months, subject to renewal.

A telecommunications operator served with a copy of the warrant is under a duty to take steps to implement the warrant, subject to reasonable practicability.

Various safeguards regarding use of acquired data are stipulated.

The policy of the IP Act

The structure of these powers reflects an underlying policy to draw the powers widely, then apply safeguards. David Anderson QC (now Lord Anderson) observed in his August 2016 Bulk Powers Review:

“I have reflected on whether there might be scope for recommending the “trimming” of some of the bulk powers, for example by describing types of conduct that should never be authorised, or by seeking to limit the downstream use that may be made of collected material. But particularly at this late stage of the parliamentary process, I have not thought it appropriate to start down that path. Technology and terminology will inevitably change faster than the ability of legislators to keep up. The scheme of the Bill, which it is not my business to disrupt, is of broad future-proofed powers, detailed codes of practice and strong and vigorous safeguards.”

If the effect of the CJEU decisions is, as already discussed, that a safeguards-heavy and limitations-light approach is not permissible, so that legislation must spell out concrete conditions for the exercise of the power rather than obligations to observe necessity and proportionality and factors to be taken into account, then the scheme of the IP Act bulk communications data retention and acquisition powers, and the arguments that succeeded in the Divisional Court in Liberty, appear to be at risk. For what it may be worth, in 2016 I suggested some limitations that could be applied to the then Bill’s bulk powers.

Beyond that, it should not be forgotten that the UK bulk powers extend to bulk interception of the content of communications. The CJEU in Digital Rights Ireland suggested that a data retention obligation relating to content might adversely affect the essence of the right of privacy under Article 7 of the Charter. The Schrems 2 decision, on the other hand, drew no distinction between content and communications data.

General and indiscriminate?

Do the IP Act data retention and bulk communications data acquisition powers amount to general and indiscriminate data retention and transmission? A blanket requirement directly imposed by legislation on all providers clearly amounts to that. In the context of a power exercisable case by case, what amounts to ‘general and indiscriminate’?

It is evident from the La Quadrature judgment ([172]) that an instruction to a single service provider is capable of being general and indiscriminate if it involves, at the request of the competent national authorities, screening all the traffic and location data retained by a provider. The same is true of the Privacy International judgment, as regards transmission. Equally, the Court endorses the principle of a power that can be exercised only in a sufficiently targeted manner.

What amounts to indiscriminate? At least, lack of objective criteria establishing a connection between the data to be retained, analysed or transmitted and the objective pursued. (La Quadrature [133])

The CJEU suggests that a geographic criterion is capable of amounting to targeted retention, if there is an objectively justifiable reason for selecting the area:

“The limits on a measure providing for the retention of traffic and location data may also be set using a geographical criterion where the competent national authorities consider, on the basis of objective and non-discriminatory factors, that there exists, in one or more geographical areas, a situation characterised by a high risk of preparation for or commission of serious criminal offences … .

Those areas may include places with a high incidence of serious crime, places that are particularly vulnerable to the commission of serious criminal offences, such as places or infrastructure which regularly receive a very high volume of visitors, or strategic locations, such as airports, stations or tollbooth areas.” La Quadrature [150]

If selecting such an area is objectively justified, then presumably it could in principle be legitimate to require all the communications of a purely local provider within that area to be retained. That would not be true if there were no objectively justifiable reason to select that area, in which case the same retention would presumably be indiscriminate.
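What such a geographic criterion might look like in operation can be sketched as a simple filter, with hypothetical area identifiers: data is retained only where it is linked to an area selected on objective, non-discriminatory grounds.

```python
# Hypothetical identifiers for areas selected on objective grounds
# (the Court's examples: airports, stations, tollbooth areas).
HIGH_RISK_AREAS = {"LHR_airport", "kings_cross_station"}

def must_retain(record: dict) -> bool:
    """Retain traffic/location data only if tied to a designated area."""
    return record.get("cell_area") in HIGH_RISK_AREAS

print(must_retain({"cell_area": "LHR_airport"}))    # True
print(must_retain({"cell_area": "rural_cell_42"}))  # False
```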

Whatever the legitimacy of the overall legislative approach adopted, if a Member State (or the UK) wishes to avail itself of the different kinds of power that the CJEU has now held are permissible in certain situations, for certain purposes, in certain factual situations (see further, below), or for certain kinds of data (such as source IP addresses or user identity data), then it seems unavoidable that the state should legislate separately for each variety of power, setting out the conditions that apply to each one.

For instance, as described below the CJEU has set out different conditions applicable to real-time and non-real-time access to data held by service providers. The UK’s Section 94 (and now the bulk communications data acquisition warrant) appear capable of covering real-time, near-real-time and non-real-time transmission, but do not differentiate between them. The CJEU commented on that in the Privacy International judgment:

“Such a disclosure of data by transmission concerns all users of means of electronic communication, without its being specified whether that transmission must take place in real-time or subsequently.” ([52])

Following the Privacy International and La Quadrature CJEU judgments it appears less likely that such lack of differentiation would pass muster.  

As to the different kinds of power under consideration, the CJEU findings in the French and Belgian references (the UK's Section 94 is discussed above) were as follows.

Mandatory data retention

Permissible general and indiscriminate retention

For mandatory data retention, the Court reaffirmed the general rule that legislation providing, as a preventive measure, for general and indiscriminate retention of traffic and location data is impermissible.

However, the Court identified certain exceptions. In each case these measures must ensure, by means of clear and precise rules, that the retention of data at issue is subject to compliance with the applicable substantive and procedural conditions and that the persons concerned have effective safeguards against the risks of abuse.

1.       Serious threat instruction for the purposes of safeguarding national security. An instruction for this purpose to retain traffic data and location data generally and indiscriminately is permissible, provided that a situation exists in which the Member State concerned is confronted with a serious threat to national security that is shown to be genuine and either present or foreseeable. The instruction may be given only for a period limited to what is strictly necessary, but which may be extended if that threat persists. The decision imposing such an instruction must be subject to effective review, either by a court or by an independent administrative body whose decision is binding. 

2.      Source IP addresses for the purposes of safeguarding national security, combating serious crime and preventing serious threats to public security. Legislation for these purposes providing for the general and indiscriminate retention of IP addresses assigned to the source of an internet connection is permissible, provided that the retention is for a period limited to what is strictly necessary.

3.      Identity data for the purposes of safeguarding national security, combating crime and safeguarding public security. Legislation for these purposes providing for the general and indiscriminate retention of data relating to the civil identity of users is permissible.

Targeted retention

The Court also elaborated on its observations in Tele2/Watson regarding permissible mandatory retention, for the purposes of combating serious crime and preventing serious threats to public security, targeted according to categories of persons and geographic criteria.

Targeted preservation

It also addressed expedited targeted preservation. For the purposes of combating serious crime and, a fortiori, safeguarding national security it is permissible to allow recourse to an instruction requiring service providers, by means of a decision of the competent authority that is subject to effective judicial review, to undertake, for a specified period of time, the expedited retention of traffic and location data in their possession.

As with the permissible categories of general and indiscriminate retention, these targeted measures are subject to the requirement for clear and precise rules and effective safeguards against the risks of abuse.

Automated analysis of traffic and location data

This part of the Court’s judgment relates to French L.851-3, mandating automated processing of traffic data and location data by the service provider for the purpose of detecting links that might constitute a terrorist threat. The Court held that such automated analysis, although general and indiscriminate (see para [172]), is permissible provided that a situation exists in which the Member State concerned is facing a serious threat to national security that is shown to be genuine and either present or foreseeable; and that recourse to automated analysis may be the subject of an effective review, either by a court or by an independent administrative body whose decision is binding.

In the course of its judgment the Court expanded, in the context of the French legislation, on steps that should be taken to ensure that pre-established models, criteria and databases are:

- specific and reliable, making it possible to achieve results identifying individuals who might be under a reasonable suspicion of participation in terrorist offences;

- non-discriminatory;

- not based on sensitive personal data in isolation; and

- subject to regular re-examination to ensure they are reliable and up to date.

Further, any positive result should be subject to individual manual re-examination before being acted upon. 
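In software terms, the Court's conditions amount to a screening pipeline in which a pre-established (and regularly re-examined) model flags records, and no flagged record may be acted upon without individual human confirmation. A minimal sketch, with hypothetical record and model formats:

```python
from typing import Callable, Iterable

def screen(
    records: Iterable[dict],
    model: Callable[[dict], bool],           # pre-established criteria/model
    human_confirms: Callable[[dict], bool],  # individual manual re-examination
) -> list[dict]:
    """Return only records flagged by the model AND confirmed by a human."""
    actionable = []
    for record in records:
        if model(record):               # automated positive result
            if human_confirms(record):  # required before any action is taken
                actionable.append(record)
    return actionable
```

The requirement that the model be regularly re-examined for reliability and non-discrimination sits outside this loop, as an ongoing governance obligation rather than a per-record check.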

Real-time access

The French measures L.851-2 and L.851-4 both enabled real-time access to traffic and location data: a variety of data in the case of L.851-2 for prevention of terrorism purposes, and technical device location data in the case of L.851-4 for a wide range of, broadly, security purposes.

The data that could be collected under L.851-2 would enable the authorities to monitor “continuously and in real time, the persons with whom those persons are communicating, the means that they use, the duration of their communications and their places of residence and movements. It may also reveal the type of information consulted online.” [184].

As regards L.851-4, the technical data would appear to allow “the department responsible, at any moment throughout the duration of that authorisation, to locate, continuously and in real time, the terminal equipment used, such as mobile telephones.”

The Court emphasised the seriousness of the interference with privacy involved in real-time collection of traffic and location data:

“It must be emphasised that the interference constituted by the real-time collection of data that allows terminal equipment to be located appears particularly serious, since that data provides the competent national authorities with a means of accurately and permanently tracking the movements of users of mobile telephones. To the extent that that data must therefore be considered to be particularly sensitive, real-time access by the competent authorities to such data must be distinguished from non-real-time access to that data, the first being more intrusive in that it allows for monitoring of those users that is virtually total … . The seriousness of that interference is further aggravated where the real-time collection also extends to the traffic data of the persons concerned.” [187]

The Court therefore distinguished between the limits and safeguards applicable to real-time and non-real time access to data. Real-time collection is not precluded for persons in respect of whom there is a valid reason to suspect that they are involved in one way or another in terrorist activities.  

That must be subject to a prior review carried out either by a court or by an independent administrative body whose decision is binding in order to ensure that such real-time collection is authorised only within the limits of what is strictly necessary. In cases of duly justified urgency, the review must take place within a short time.

In this case the Court used the specific language of ‘prior’ review, as opposed to ‘effective’ review.

The Court also emphasised, in the body of its judgment, that a decision authorising the real-time collection of traffic and location data must be based on objective and non-discriminatory criteria provided for in the national legislation and requiring the court or other independent administrative body carrying out the prior review to satisfy itself, inter alia, that such real-time collection is authorised only within the limits of what is strictly necessary.

Non-real-time access

Although not forming part of the operative part of the judgment, the Court commented on the conditions that should apply to non-real time collection. As described in Tele2/Watson: “access can, as a general rule, be granted, in relation to the objective of fighting crime, only to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime”.

However: “in particular situations, where for example vital national security, defence or public security interests are threatened by terrorist activities, access to the data of other persons might also be granted where there is objective evidence from which it can be deduced that that data might, in a specific case, make an effective contribution to combating such activities.”

Thus, the Court in this case observed that non-real-time collection would be permissible for persons not suspected of involvement in one way or another in terrorist activities, but only where there is objective evidence from which it can be deduced that that data might, in a specific case, make an effective contribution to combating terrorism.
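Setting the two regimes side by side, the Court's scheme can be summarised as a pair of gateway tests. A heavily simplified sketch, on the obvious caveat that the underlying legal judgments cannot be mechanised:

```python
def realtime_access_permitted(validly_suspected: bool,
                              prior_independent_review: bool) -> bool:
    # Real-time collection: only for persons validly suspected of
    # involvement in terrorist activities, and subject to prior review
    # by a court or a binding independent administrative body.
    return validly_suspected and prior_independent_review

def non_realtime_access_permitted(validly_suspected: bool,
                                  objective_evidence: bool) -> bool:
    # Non-real-time access: suspects as a general rule; others only where
    # objective evidence shows the data might, in a specific case, make
    # an effective contribution to combating terrorism.
    return validly_suspected or objective_evidence
```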

Conclusion

Back in 2017 (updated in January 2020) I wrote a piece entitled ‘Visions of Adequacy’, in which I suggested that:

“Although concerned with bulk data retention rather than interception or interference, Watson/Tele2 provides pointers to the possible future direction of CJEU decisions. As did Schrems, Watson/Tele2 emphasises the need for differentiation, limitation and exceptions in the light of the objective pursued. This suggests that while appropriately focused and granular bulk powers may be acceptable, blanket bulk powers may not be.

If that is to be the future direction of CJEU caselaw then the IP Act’s bulk powers, which are longer on safeguards than they are on limitations, may be in trouble. …

Statutory bulk powers could be differentiated and limited. Distinctions could be made between, for instance, seeded and unseeded data mining. If pattern recognition and anomaly detection is valuable for detecting computerised cyber attacks, legislation could focus its use on that purpose. Such limitations could prevent it being used for attempts to detect and predict suspicious behaviour in the general population, precrime-style.

Under the Act these distinctions are left to assessments of necessity and proportionality by Ministers and Judicial Commissioners when issuing and approving warrants, buttressed by after the event oversight. These are soft limits, rather than the hard limits that may in future be required for bulk powers to pass muster.”

The latest CJEU decisions reinforce the perception that that is indeed the direction of its caselaw. They raise hard questions about the UK soft limits approach, even before assessing whether the UK powers are substantively compatible with the various categories now articulated by the CJEU.

[Clarificatory amendment to second paragraph of 'Internal service provider activities versus state access', 6 Oct 2020; amendment to clarify that the findings listed in the second half of the post are those in the French and Belgian references, 19 October 2020; correction regarding appeal to Court of Appeal in Liberty proceedings, 20 October 2020.]