Sunday, 7 October 2018

A Lord Chamberlain for the internet? Thanks, but no thanks.

This summer marked the fiftieth anniversary of the Theatres Act 1968, the legislation that freed the theatres from the censorious hand of the Lord Chamberlain of Her Majesty’s Household. Thereafter theatres needed to concern themselves only with the general laws governing speech. In addition they were granted a public good defence to obscenity and immunity from common law offences against public morality.

The Theatres Act is celebrated as a landmark of enlightenment. Yet today we are on the verge of creating a Lord Chamberlain of the Internet. We won't call it that, of course. The Times, in its leader of 5 July 2018, came up with the faintly Orwellian "Ofnet". Speculation has recently renewed that the UK government is laying plans to create a social media regulator to tackle online harm. What form that might take, should it happen, we do not know. We will find out when the government produces a promised white paper.

When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.

The statute book is full of legislation that restrains speech. Most, if not all, of this legislation applies online as well as offline. Some of it applies more strictly online than offline. These laws set boundaries: defamation, obscenity, intellectual property rights, terrorist content, revenge porn, harassment, incitement to racial and religious hatred and many others. Those boundaries represent a balance between freedom of speech and harm to others. It is for each of us to stay inside the boundaries, wherever they may be set. Within those boundaries we are free to say what we like, whatever someone in authority may think. Independent courts, applying principles, processes and presumptions designed to protect freedom of speech, adjudge alleged infractions according to clear, certain laws enacted by Parliament.

But much of the current discussion centres on something quite different: regulation by regulator. This model concentrates discretionary power in a state agency. In the UK the model is to a large extent the legacy of the 1980s Thatcher government, which started the 'Of-' trend by creating OFTEL (as it then was) to regulate the newly liberalised telecommunications market. A powerful regulator, operating flexibly within broadly stated policy goals, can be rule-maker, judge and enforcer all rolled into one.

That may be a long-established model for economic regulation of telecommunications competition, energy markets and the like. But when regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve.
As OFCOM puts it in its recent discussion paper ‘Addressing Harmful Online Content’: “What has worked in a broadcasting context is having a set of objectives laid down by Parliament in statute, underpinned by detailed regulatory guidance designed to evolve over time. Changes to the regulatory requirements are informed by public consultation.”

Where exactly the limits on freedom of speech should lie is a matter of intense, perpetual, debate. It is for Parliament to decide, after due consideration, whether to move the boundaries. It is anathema to both freedom of speech and the rule of law for Parliament to delegate to a regulator the power to set limits on individual speech.

It becomes worse when a document like the government’s Internet Safety Strategy Green Paper takes aim at subjective notions of social harm and unacceptability rather than at what is legal or illegal under the law. ‘Safety’ readily becomes an all-purpose banner under which to proceed against nebulous categories of speech which the government dislikes but cannot adequately define.

Also troubling is the frequently erected straw man that the internet is unregulated. This blurs the vital distinction between the general law and regulation by regulator. Participants are prone to discuss regulation as if the general law did not exist.

Occasionally the difference is acknowledged, but not necessarily as a virtue. The OFCOM discussion paper observes that by contrast with broadcast services subject to long established regulation, some newer online services are ‘subject to little or no regulation beyond the general law’, as if the general law were a mere jumping-off point for further regulation rather than the democratically established standard for individual speech.

OFCOM goes on to say that this state of affairs was “not by design, but the outcome of an evolving system”. However, a deliberate decision was taken with the Communications Act 2003 to exclude internet content from OFCOM’s jurisdiction in favour of the general law alone.

Moving away from individual speech, the OFCOM paper characterises the fact that online newspapers are not subject to the impartiality requirements that apply to broadcasters as an inconsistency. Different, yes. Inconsistent, no.

Periodically since the 1990s the idea has surfaced that as a result of communications convergence broadcast regulation should, for consistency, apply to the internet. With the advent of video over broadband aspects of the internet started to bear a superficial resemblance to television. The pictures were moving, send for the TV regulator.

EU legislators have been especially prone to this non-sequitur. They are currently enacting a revision of the Audiovisual Media Services Directive that will require a regulator to exercise some supervisory powers over video sharing platforms.

However broadcast regulation, not the rule of general law, is the exception to the norm. It is one thing for a body like OFCOM to act as broadcast regulator, reflecting television’s historic roots in spectrum scarcity and Reithian paternalism. Even that regime is looking more and more anachronistic as TV becomes less and less TV-like. It is quite another to set up a regulator with power to affect individual speech. And it is no improvement if the task of the regulator is framed as setting rules about the platforms’ rules. The result is the same: discretionary control exercised by a state entity (however independent of the government it may be) over users’ speech, via rules that Parliament has not specifically legislated.

It is true, as the OFCOM discussion paper notes, that the line between broadcast and non-broadcast regulation means that the same content can be subject to different rules depending on how it is accessed. If that is thought to be anomalous, it is a small price to pay for keeping regulation by regulator out of areas in which it should not tread.

The House of Commons Digital, Culture, Media and Sport Committee, in its July 2018 interim report on fake news, recommended that the government should use OFCOM’s broadcast regulation powers, “including rules relating to accuracy and impartiality”, as “a basis for setting standards for online content”. It is perhaps testament to the loss of perspective that the internet routinely engenders that a Parliamentary Committee could, in all seriousness, suggest that accuracy and impartiality rules should be applied to the posts and tweets of individual social media users.

Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’être. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech. Offensive or controversial words are not akin to oil washed up on the seashore or chemicals discharged into a river. Objectively ascertainable physical damage caused by an oil spill bears no relation to a human being evaluating and reacting to the merits and demerits of what people say and write.

If we go further and transpose the environmental precautionary principle to speech we then have prior restraint – the opposite of the presumption against prior restraint that has long been regarded as a bulwark of freedom of expression. All the more surprising then that The Times, in its July Ofnet editorial, should complain of the internet that “by the time police and prosecutors are involved the damage has already been done”. That is an invitation to step in and exercise prior restraint.

As an aside, do the press really think that Ofnet would not before long be knocking on their doors to discuss their online editions? That is what happened when ATVOD tried to apply the Audiovisual Media Services Directive to online newspapers that incorporated video. Ironically it was The Times' sister paper, the Sun, that successfully challenged that attempt.

The OFCOM discussion paper observes that there are “reasons to be cautious over whether [the broadcast regime] could be exported wholesale to the internet”. Those reasons include that “expectations of protection or [sic] freedom of expression relating to conversations between individuals may be very different from those relating to content published by organisations”.

US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back in its bottle.

Regulation by regulator, applied to speech, harks back to the bad old days of the Lord Chamberlain and theatres. In a free and open society we do not appoint a Lord Chamberlain of the Internet – even one appointed by Parliament rather than by the Queen - to tell us what we can and cannot say online, whether directly or via the proxy of online intermediaries. The boundaries are rightly set by general laws.

We can of course debate what those laws should be. We can argue about whether intermediary liability laws are appropriately set. We can consider what tortious duties of care apply to online intermediaries and whether those are correctly scoped. We can debate the dividing line between words and conduct. We can discuss the vexed question of an internet that is both reasonably safe for children and fit for grown-ups. We can think about better ways of enforcing laws and providing victims of unlawful behaviour with remedies. These are matters for public debate and for Parliament and the general law within the framework of fundamental rights. None of this requires regulation by regulator. Quite the opposite.

Nor is it appropriate to frame these matters of debate as (in the words of The Times) “an opportunity to impose the rule of law on a legal wilderness where civic instincts have been suspended in favour of unthinking libertarianism for too long”. People who use the internet, like people everywhere, are subject to the rule of law. The many UK internet users who have ended up before the courts, both civil and criminal, are testament to that. Disagreement with the substantive content of the law does not mean that there is a legal vacuum.

What we should be doing is to take a hard look at what laws do and don’t apply online (the Law Commission is already looking at social media offences), revise those laws if need be, and then look at how they can most appropriately be enforced.

This would involve looking at areas that it is tempting for a government to avoid, such as access to justice. How can we give people quick and easy access to independent tribunals with legitimacy to make decisions about online illegality? The current court system cannot provide that service at scale, and it is quintessentially a job for government rather than private actors. More controversially, is there room for greater use of powers such as ‘internet ASBOs’ to target the worst perpetrators of online illegality? The existing law contains these powers, but they seem to be little used.

It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.



Thursday, 13 September 2018

Big Brother Watch v UK – implications for the Investigatory Powers Act?

Today I have been transported back in time, to that surreal period following the Snowden revelations in 2013 when anyone who knew anything about the previously obscure RIPA (Regulation of Investigatory Powers Act 2000) was in demand to explain how it was that GCHQ was empowered to conduct bulk interception on a previously unimagined scale.

The answer (explained here) lay in the ‘certificated warrants’ regime under S.8(4) RIPA for intercepting external communications. ‘External’ communications were those sent or received outside the British Islands, thus including communications with one end in the British Islands.
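The one-end-in rule can be expressed compactly. Here is a minimal sketch in Python, assuming a simple two-ended model of a communication; the type and field names are illustrative, not anything defined in RIPA:

    from dataclasses import dataclass

    @dataclass
    class Communication:
        sender_in_british_islands: bool
        recipient_in_british_islands: bool

    def is_external(c: Communication) -> bool:
        # 'External' = sent or received outside the British Islands.
        # Only a communication with both ends inside is internal, so a
        # one-end-in communication still falls within a s.8(4) warrant.
        return not (c.sender_in_british_islands and c.recipient_in_british_islands)

    # An email from the UK to the US has one end in the British Islands,
    # yet is 'external'; only wholly domestic traffic is excluded.
    assert is_external(Communication(True, False))
    assert not is_external(Communication(True, True))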

Initially we knew about GCHQ’s TEMPORA programme and, as the months stretched into years, we learned from the Intelligence and Security Committee of the importance to GCHQ of bulk intercepted metadata (‘related communications data’, in RIPA jargon):

“We were surprised to discover that the primary value to GCHQ of bulk interception was not in the actual content of communications, but in the information associated with those communications.” [80] (Report, March 2015)
According to a September 2015 Snowden disclosure, bulk intercepted communications data was processed and extracted into query-focused datasets such as KARMA POLICE, containing billions of rows of data. David (now Lord) Anderson QC’s August 2016 Bulk Powers Review gave an indication of some techniques that might be used to analyse metadata, including unseeded pattern analysis.

Once the Investigatory Powers Bill started its journey into legislation the RIPA terminology started to fade. But today it came back to life, with the European Court of Human Rights judgment in Big Brother Watch and others v UK.

The fact that the judgment concerns a largely superseded piece of legislation does not necessarily mean it is of historic interest only. The Court held that both the RIPA bulk interception regime and its provisions for acquiring communications data from telecommunications operators violated Articles 8 (privacy) and 10 (freedom of expression) of the European Convention on Human Rights. The interesting question for the future is whether the specific aspects that resulted in the violation have implications for the current Investigatory Powers Act 2016.

The Court expressly did not hold that bulk interception per se was impermissible. But it said that a bulk interception regime, where an agency has broad discretion to intercept communications, does have to be surrounded with more rigorous safeguards around selection and examination of intercepted material. [338]

It is difficult to be categoric about when the absence of a particular feature or safeguard will or will not result in a violation, since the Court endorsed its approach in Zakharov whereby in assessing whether a regime is ‘in accordance with the law’ the Court can have regard to certain factors which are not minimum requirements, such as arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law. [320]

That said, the Court identified three failings in RIPA that were causative of the violations. These concerned selection and examination of intercepted material, related communications data, and journalistic privilege.

Selection and examination of intercepted material

The Court held that lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst, meant that the RIPA S. 8(4) bulk interception regime did not meet the “quality of law” requirement under Article 8 and was incapable of keeping the “interference” with Article 8 to what is “necessary in a democratic society”.

As to whether the IPAct suffers from the same failing, a careful study of the Act may lead to the conclusion that, when considering whether to approve a bulk interception warrant, the independent Judicial Commissioner should look at the entire selection process. Indeed I argued exactly that in a submission to the Investigatory Powers Commissioner. Whether it is clear that that is the case and, even if it is, whether the legislation and supporting public documents are sufficiently clear as to the level of granularity at which such oversight should be conducted, is another matter.

As regards selectors (the Court’s greatest concern), the Court observed that while it is not necessary that selectors be listed in the warrant, mere after-the-event audit and the possibility of an application to the IPT were not sufficient. The search criteria and selectors used to filter intercepted communications should be subject to independent oversight. [340]

Related communications data

The RIPA safeguards for examining bulk interception product (notably the certificate to select a communication for examination by reference to someone known to be within the British Islands) did not apply to ‘related communications data’ (RCD). RCD is communications data (in practice traffic data) acquired by means of the interception.

The significance of the difference in treatment increases once it is appreciated that RCD includes data obtained from incidentally intercepted internal communications, and that RIPA imposes no requirement to discard such material. As the Court noted: “The related communications data of all intercepted communications – even internal communications incidentally intercepted as a “by-catch” of a section 8(4) warrant – can therefore be searched and selected for examination without restriction.” [348]

The RCD regime under RIPA can be illustrated graphically:

[Diagram: under RIPA, the examination safeguards (including the British Islands restriction) apply to intercepted content but not to related communications data.]

In this regard the IPAct is virtually identical. We now have tweaked definitions of ‘overseas-related communications’ and ‘secondary data’ instead of external communications and RCD, but the structure is the same:

[Diagram: the IPAct structure is the same, with ‘overseas-related communications’ and ‘secondary data’ in place of external communications and RCD.]

The only substantive additional safeguard is that examination of secondary data has to be for stated operational purposes (which can be broad).
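The structural point can be made concrete with a schematic sketch of which examination safeguards attach to which category of bulk intercept product. This is my own shorthand for the scheme described above, not statutory language:

    def examination_safeguards(item, regime):
        # item: 'content' or 'secondary_data' (RCD, in RIPA terms)
        # regime: 'RIPA' or 'IPAct'
        safeguards = []
        if item == "content":
            # Selecting content by reference to someone known to be in the
            # British Islands requires certification (RIPA s.16 and its
            # IPAct analogue).
            safeguards.append("british_islands_restriction")
        if regime == "IPAct":
            # The IPAct's only substantive addition: selection must be for
            # a stated (and potentially broad) operational purpose.
            safeguards.append("stated_operational_purpose")
        return safeguards

    # The gap the Court identified: under RIPA, related communications data
    # could be searched and selected for examination without restriction.
    assert examination_safeguards("secondary_data", "RIPA") == []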

The Court accepted that under RIPA, as the government argued (and had argued in the original IPT proceedings):
“the effectiveness of the [British Islands] safeguard [for examination of content] depends on the intelligence services having a means of determining whether a person is in the British Islands, and access to related communications data would provide them with that means.” [354]
But it went on:

“Nevertheless, it is a matter of some concern that the intelligence services can search and examine “related communications data” apparently without restriction. While such data is not to be confused with the much broader category of “communications data”, it still represents a significant quantity of data. The Government confirmed at the hearing that “related communications data” obtained under the section 8(4) regime will only ever be traffic data.  
However, … traffic data includes information identifying the location of equipment when a communication is, has been or may be made or received (such as the location of a mobile phone); information identifying the sender or recipient (including copy recipients) of a communication from data comprised in or attached to the communication; routing information identifying equipment through which a communication is or has been transmitted (for example, dynamic IP address allocation, file transfer logs and e-mail headers (other than the subject line of an e-mail, which is classified as content)); web browsing information to the extent that only a host machine, server, domain name or IP address is disclosed (in other words, website addresses and Uniform Resource Locators (“URLs”) up to the first slash are communications data, but after the first slash content); records of correspondence checks comprising details of traffic data from postal items in transmission to a specific address, and online tracking of communications (including postal items and parcels). [355] 

In addition, the Court is not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content. For example, the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with. [356]

Consequently, while the Court does not doubt that related communications data is an essential tool for the intelligence services in the fight against terrorism and serious crime, it does not consider that the authorities have struck a fair balance between the competing public and private interests by exempting it in its entirety from the safeguards applicable to the searching and examining of content. While the Court does not suggest that related communications data should only be accessible for the purposes of determining whether or not an individual is in the British Islands, since to do so would be to require the application of stricter standards to related communications data than apply to content, there should nevertheless be sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.” [357]

This is a potentially significant holding. In IPAct terms it would appear to require that selection for examination of secondary data for any purpose other than determining whether an individual is, for the time being, in the British Islands should be subject to different and more stringent limitations and procedures.
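Expressed as a rule, my reading of [357] is something like the following sketch (the purpose label is illustrative, not the Court's wording):

    def secondary_data_selection_permitted(purpose, other_safeguards_met):
        # The exemption from the content safeguards should be limited to
        # the extent necessary to determine whether an individual is, for
        # the time being, in the British Islands [357].
        if purpose == "determine_whether_in_british_islands":
            return True
        # Any other purpose should attract sufficient safeguards of its
        # own (not necessarily the full content regime).
        return other_safeguards_met

    assert secondary_data_selection_permitted(
        "determine_whether_in_british_islands", False)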

It is also noteworthy that, unlike RIPA, the IPAct contains provisions enabling some categories of content to be extracted from intercepted communications and treated as secondary data.

Journalistic privilege

The Court found violations of Article 10 under both the bulk interception regime and the regime for acquisition of communications data from telecommunications service providers.

For bulk interception, the court focused on lack of protections at the selection and examination stage: “In the Article 10 context, it is of particular concern that there are no requirements – at least, no “above the waterline” requirements – either circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector), or requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved. Consequently, it would appear that analysts could search and examine without restriction both the content and the related communications data of these intercepted communications.” [493]

For communications data acquisition, the court observed that the protections for journalistic privilege only applied where the purpose of the application was to determine a source; they did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely. [499]

This may have implications for those IPAct journalistic safeguards that are limited to applications made ‘for the purpose of’ intercepting or examining journalistic material or sources.




Tuesday, 5 June 2018

Regulating the internet – intermediaries to perpetrators

Nearly twenty-five years after the advent of the Web, and longer since the birth of the internet, we still hear demands that the internet should be regulated - for all the world as if people who use the internet were not already subject to the law. The May 2017 Conservative manifesto erected a towering straw man: “Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.” The straw man even found its way into the title of the current House of Lords Communications Committee inquiry: "The Internet: to regulate or not to regulate?".

The choice is not between regulating and not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; between laws that expose particular activities, such as search or hosting, to greater or lesser liability, or that visit them with more or less onerous obligations; between regimes that pay more or less regard to fundamental rights; and between prioritising perpetrators or intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet.

A recent article by David Anderson Q.C. asked the question 'Who governs the Internet?' and spoke of 'subjecting the tech colossi to the rule of law'. The only acceptable answer to the ‘who governs?’ question is certainly 'the law'. We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator. But as to the rule of law, we should not confuse the existence of laws with disagreement about what, substantively, those laws should consist of. Bookshops and magazine distributors operate, for defamation, under a liability system with some similarities to the hosting regime under the Electronic Commerce Directive. No-one has suggested, nor one hopes would suggest, that as a consequence they are not subject to the rule of law.

It is one thing to identify how not to regulate, but it would be foolish to deny that there are real concerns about some of the behaviour that is to be found online. The government is currently working towards a White Paper setting out proposals for legislation to tackle “a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation”. What is to be done about harassment, bullying and other abusive behaviour that is such a significant contributor to the current furore?

Putting aside the debate about intermediary liability and obligations, we could ask whether we are making good enough use of the existing statute book to target perpetrators. The criminal law exists, but can be seen as a blunt instrument. It was for good reason that the Director of Public Prosecutions issued lengthy prosecutorial guidelines for social media offences.

Occasionally the idea of an ‘Internet ASBO’ has been floated. Three years ago a report of the All-Party Parliamentary Inquiry into Antisemitism recommended, adopting an analogy with sexual offences prevention orders, that the Crown Prosecution Service should undertake a “review to examine the applicability of prevention orders to hate crime offences and if appropriate, take steps to implement them.” 

A possible alternative, however, may lie elsewhere on the statute book. The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes the kind of online behaviour complained of.

Nothing in the legislation restricts an ASBI to offline activities. Indeed over 10 years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17 year old who had been posting material on the social media platform Bebo, banning him from publishing material that was threatening or abusive and promoted criminal activity.  

ASBIs raise difficult questions of how they should be framed and of proportionality, and there may be legitimate concerns about the broad terms in which anti-social behaviour is defined. Nevertheless the courts to which applications are made have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017) makes no mention of their use in relation to online behaviour.  That could perhaps usefully be revisited. Another possibility might be to explore extending the ability to apply for an ASBI beyond the authorities, for instance to some voluntary organisations. 

Whilst the debate about how to regulate internet activities and the role of intermediaries is not about to go away, we should not let that detract from the importance of focusing on remedies against the perpetrators themselves.

Monday, 30 April 2018

The Electronic Commerce Directive – a phantom demon?

Right now the ECommerce Directive – or at any rate the parts that shield hosting intermediaries from liability for users’ content - is under siege. The guns are blazing from all directions: The Prime Minister’s speech in Davos, Culture Secretary Matt Hancock’s speech at the Oxford Media Convention on 12 March 2018 and the European Commission’s Recommendation on Tackling Illegal Content Online all take aim at the shield, or at its linked bar on imposing general monitoring obligations on conduits, caches and hosts. The proposed EU Copyright Directive is attacking from the flanks.

The ECommerce Directive is, of course, part of EU law. As such the UK could, depending on what form Brexit takes, diverge from it post-Brexit. The UK government has identified the Directive as a possible divergence area and Matt Hancock's Department for Digital, Culture, Media and Sport (DCMS) is looking at hosting liability.
The status quo

Against this background it is worth looking behind the polarised rhetoric that characterises this topic and, before we decide whether to take a wrecking ball to the Directive's liability provisions, take a moment to understand how they work.  As so often with internet law, the devil revealed by the detail is a somewhat different beast from that portrayed in the sermons.
We can already sense something of that disparity. In her Davos speech Theresa May said:
“As governments, it is also right that we look at the legal liability that social media companies have for the content shared on their sites. The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts.”
If this was intended to question existing platform liability protections, it was a curious remark. Following the CJEU decisions in LVMH v Google France and L’Oreal v eBay, if a hosting platform treats user content non-neutrally it will not have liability protection for that content. By non-neutrally the CJEU means that the operator "plays an active role of such a kind as to give it knowledge of, or control over, those data".

So the status quo is that if a platform does not act neutrally as a passive host it is potentially exposed to legal liability.
By questioning the status quo did the Prime Minister mean to advocate greater protection for platforms who act non-neutrally than currently exists? In the febrile atmosphere that currently surrounds social media platforms that seems unlikely, but it could be the literal reading of her remarks. If not, is it possible that the government is taking aim at a phantom?
Matt Hancock's speech on 12 March added some detail:

"We are looking at the legal liability that social media companies have for the content shared on their sites. Because it’s a fact on the web that online platforms are no longer just passive hosts.
But this is not simply about applying publisher or broadcaster standards of liability to online platforms.
There are those who argue that every word on every platform should be the full legal responsibility of the platform. But then how could anyone ever let me post anything, even though I’m an extremely responsible adult?
This is new ground and we are exploring a range of ideas… including where we can tighten current rules to tackle illegal content online… and where platforms should still qualify for ‘host’ category protections."
It is debatable whether this is really new ground when these issues have been explored since the advent of bulletin boards and then the internet. Nevertheless there can be no doubt that the rise of social media platforms has sparked off a new round of debate.
  
Sectors, platforms and activities

The activities of platforms are often approached as if they constitute a homogeneous whole: the platform overall is either a passive host or it is not. Baroness Kidron, opening the House of Lords social media debate on 11 January 2018, went further, drawing an industry sector contrast between media companies and tech businesses:
“Amazon has set up a movie studio. Facebook has earmarked $1 billion to commission original content this year. YouTube has fully equipped studios in eight countries."
She went on:  

"The Twitter Moments strand exists to “organize and present compelling content”. Apple reviews every app submitted to its store, “based on a set of technical, content, and design criteria”. By any other frame of reference, this commissioning, editing and curating is for broadcasting or publishing.”
However the ECommerce Directive does not operate at a business sector level, nor at the level of a platform treated as a whole. It operates at the level of specific activities and items of content. If an online host starts to produce its own content like a media company, then it will not have the protection of the Directive for that activity. Nor will it have protection for user content that it selects and promotes so as to have control over it.  Conversely if a media or creative company starts to host user-generated content and treats it neutrally, it will have hosting protection for that activity.  

In this way the Directive adapts to changes in behaviour and operates across business models. It is technology-neutral and business sector-agnostic. A creative company that develops an online game or virtual world will have hosting protection for what users communicate to each other in-world and for what they make using the tools provided to them.
The line that the Directive draws is not between media and tech businesses, nor between simple and complex platforms, but at the fine-grained level of individual items of content. The question is always whether the host has intervened at the level of a particular item of content to the extent that, in the words of one academic,[1] it might be understood to be its own. If it does that, then the platform will not have hosting protection for that item of content. It will still have protection for other items of user-generated content in relation to which it has remained neutral. The scheme of the Directive is illustrated in this flowchart.

[Flowchart: the hosting liability analysis under the ECommerce Directive.]

The analysis can be illustrated by an app such as one that an MP might provide for the use of constituents. Videos made by the MP would be his or her own content, not protected by the hosting provisions. If the app allows constituents to post comments to a forum, those would attract hosting protection. If the MP selected and promoted a comment as Constituent Comment of the Day, he or she would have intervened sufficiently to lose hosting protection for that comment.
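In pseudocode terms the per-item analysis might be sketched as follows; the two flags compress the CJEU's 'active role' test into illustrative predicates, which are mine rather than the Directive's:

    from dataclasses import dataclass

    @dataclass
    class Item:
        produced_by_platform: bool = False   # the operator's own content
        selected_or_promoted: bool = False   # non-neutral intervention

    def hosting_protection_available(item):
        # The analysis turns on the operator's role as to this item,
        # not on its business sector or the platform taken as a whole.
        if item.produced_by_platform:
            return False    # e.g. the MP's own videos
        if item.selected_or_promoted:
            return False    # e.g. 'Constituent Comment of the Day'
        return True         # neutrally hosted user content

    # Losing protection for one promoted comment leaves protection
    # intact for the rest of the forum:
    assert not hosting_protection_available(Item(selected_or_promoted=True))
    assert hosting_protection_available(Item())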

This activity-based drawing of the line is not an accident. It was the declared intention of the promoters of the Directive. The European Commission said in its Proposal for the Directive back in 1998:
"The distinction as regards liability is not based on different categories of operators but on the specific types of activities undertaken by operators. The fact that a provider qualifies for an exemption from liability as regards a particular act does not provide him with an exemption for all his other activities." 
Courts in Ireland (Mulvaney v Betfair), the UK (Kaschke v Gray, England and Wales Cricket Board v Tixdaq) and France (TF1 v Dailymotion) have reached similar conclusions (albeit in Tixdaq only a provisional conclusion).  Most authoritatively, the CJEU in L'Oreal v eBay states that a host that has acted non-neutrally in relation to certain data cannot rely on the hosting protection in the case of those data (judgment, para [116] - and see flowchart above).

The report of the Committee on Standards in Public Life on "Intimidation in Public Life" also discussed hosting liability.  It said:
“Parliament should reconsider the balance of liability for social media content. This does not mean that the social media companies should be considered fully to be the publishers of the content on their sites. Nor should they be merely platforms, as social media companies use algorithms that analyse and select content on a number of unknown and commercially confidential factors.”
Analysing and selecting user content so as to give the operator control over the selected content would exclude that content from hosting protection under the ECommerce Directive. The Committee's suggestion that such activities should have a degree of protection short of full primary publisher liability would seem to involve increasing, not decreasing, existing liability protection. That is the opposite of what, earlier in the Report, the Committee seemed to envisage would be required: “The government should seek to legislate to shift the balance of liability for illegal content to the social media companies away from them being passive ‘platforms’ for illegal content.”

Simple and complex platforms

The question of whether a hosting platform has behaved non-neutrally in relation to any particular content is also unrelated to the simplicity or complexity of the platform. The Directive has been applied to vanilla web hosting and structured, indexed platforms alike.  That is consistent with the contextual background to the Directive, which included court decisions on bulletin boards (in some ways the forerunners of today’s social media sites) and the Swedish Bulletin Boards Act 1998.

The fact that the ECD encompasses simple and complex platforms alike leads to a final point: the perhaps underappreciated variety of activities that benefit from hosting protection.  They include, as we have seen, online games and virtual worlds. They would include collaborative software development environments such as GitHub. Cloud-based word processor applications, any kind of app with a user-generated content element, website discussion forums, would all be within scope. By focusing on activities defined in a technology-neutral way the Directive has transcended and adapted to many different evolving industries and kinds of business.
The voluntary sector

Nor should we forget the voluntary world. Community discussion forums are (subject to one possible reservation) protected by the hosting shield.  The reservation is that the ECD covers services of a kind ‘normally provided for remuneration’. The reason for this is that the ECD was an EU internal market Directive, based on the Services title of the TFEU. As such it had to be restricted to services with an economic element. 
In line with EU law on the topic the courts have interpreted this requirement generously. Nevertheless there remains a nagging doubt about the applicability of the protection to purely voluntary activities.  The government could do worse than consider removing the "normally provided for remuneration" requirement so that the Mumsnets, the sports fan forums, the community forums of every kind can clearly be brought within the hosting protection.

[Amended 28 July 2018 with comment added after Matt Hancock quotation and addition of hosting liability flowchart.]




[1]               C. Angelopoulos, 'On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market' (January 2017).

Friday, 27 April 2018

The IPAct data retention regime lives on (but will have to change before long)


The High Court gave judgment this morning on Liberty’s challenge to the mandatory communications data retention provisions of the Investigatory Powers Act (IPAct). 

The big questions in the Liberty case were:
  • What does the government have to do to make the IPAct comply with EU law following the Tele2/Watson decision of the CJEU?
  • Has the government done enough in its proposed amendments to the IPAct, designed to address two admitted grounds of non-compliance with EU law?
  • When does it have to make changes?


In brief, the court has made a finding of non-compliance with EU law limited to the two grounds admitted by the government.  The court declared that Part 4 of the Investigatory Powers Act 2016 is incompatible with fundamental rights in EU law in that in the area of criminal justice:
(1) access to retained data is not limited to the purpose of combating “serious crime”; and
(2) access to retained data is not subject to prior review by a court or an independent administrative body.

As to timing to make changes, Liberty argued for no later than 31 July 2018 and the government for no earlier than 1 April 2019. The court decided that 1 November 2018 would be a reasonable time in which to amend the legal framework (albeit with a suggestion that practical implementation might take longer). In the meantime the existing IPAct data retention regime remains in effect, although lacking the two limitations and safeguards that have led to the admitted non-compliance with EU law.

The court observed, having noted that the question of appropriate remedy took the court into ‘deep constitutional waters’:
“… we are not prepared to contemplate the grant of any remedy which would have the effect, whether expressly or implicitly, of causing chaos and which would damage the public interest.
Nor do we consider that any coercive remedy is either necessary or appropriate. This is particularly so in a delicate constitutional context, where what is under challenge is primary legislation and where the Government proposes to introduce amending legislation which, although it will be in the form of secondary legislation rather than primary, will be placed before Parliament for the affirmative resolution procedure to be adopted.
On the other hand it would not be just or appropriate for the Court simply to give the Executive a carte blanche to take as long as it likes in order to secure compliance with EU law. The continuing incompatibility with EU law is something which needs to be remedied within a reasonable time. As long ago as July 2017 the Defendants conceded that the existing Act is incompatible with EU law in two respects.”

Turning to the main remaining grounds relied upon by Liberty:

1. Perhaps of greatest significance, the court rejected Liberty’s argument that the question of whether the legislation fell foul of the Tele2/Watson prohibition on general and indiscriminate retention of communications data should be referred to the CJEU. It noted a number of differences from the Swedish legislation considered in Tele2/Watson and concluded:

“In the light of this analysis of the structure and content of Part 4 of the 2016 Act, we do not think it could possibly be said that the legislation requires, or even permits, a general and indiscriminate retention of communications data. The legislation requires a range of factors to be taken into account and imposes controls to ensure that a decision to serve a retention notice satisfies (inter alia) the tests of necessity in relation to one of the statutory purposes, proportionality and public law principles.” The court declined to refer the point to the CJEU.

2. The question of whether national security is within the scope of the CJEU Watson decision was stayed pending the CJEU’s decision in the reference from the Investigatory Powers Tribunal in the Privacy International case. The court declined to make a reference to the CJEU in these proceedings.

3. Liberty argued that a ‘seriousness’ threshold should apply to all other objectives permitted under Article 15(1) of the EU ePrivacy Directive, not just to crime. The court held that other than for criminal offences the fact that national legislation does not impose a “seriousness” threshold on a permissible objective for requiring the retention of data (or access thereto) does not render that legislation incompatible with EU law and that necessity and proportionality were adequate safeguards. It declined to refer the point to the CJEU.

4. A highly technical point about whether the CJEU Watson decision applied to ‘entity data’ as defined in the IPAct, or only to ‘events data’, was resolved in favour of the government.

5. Liberty argued that retention purposes concerned with protecting public health, tax matters, and regulation of financial services/markets and financial stability should be declared incompatible. The court declined to grant a remedy since the government intends to remove those purposes anyway.

6. As to whether mandatorily retained data has to be held within the EU, the court stayed that part of the claim pending the CJEU’s decision in the IPT reference in the Privacy International case.

7. The part of the claim regarding notification of those whose data has been accessed was also stayed pending the CJEU’s decision in the IPT reference in the Privacy International case.

By way of background to the decision, the IPAct was the government’s replacement for DRIPA, the legislation that notoriously was rushed through Parliament in 4 days in July 2014 following the CJEU’s nullification of the EU Data Retention Directive in Digital Rights Ireland.

DRIPA expired on 31 December 2016. But even as the replacement IPAct provisions were being brought into force it was obvious that they would have to be amended to comply with EU law, following the CJEU decision in Tele2/Watson issued on 21 December 2016.

A year then passed before the government published a consultation on proposals to amend the IPAct, admitting that the IPAct was non-compliant with EU law on the two grounds of lack of limitation to serious crime and lack of independent prior review of access requests. 

That consultation closed on 18 January 2018. Today’s judgment noted the government’s confirmation that legislation is due to be considered by Parliament before the summer recess in July 2018.

In the consultation the government set out various proposals designed to comply with Tele2/Watson:

-         A new body (the Office for Communications Data Authorisations) would be set up to give prior independent approval of communications data requests. These have been running at over 500,000 a year.

-         Crime-related purposes for retaining or acquiring events data would be restricted to serious crime, albeit broadly defined.

-         Removal of retention and acquisition powers for public health, tax collection and regulation of financial markets or financial stability.

The government's proposals were underpinned by some key interpretations of Tele2/Watson. The government contended in the consultation that:

-         Tele2/Watson does not apply to national security, so that requests by MI5, MI6 and GCHQ would still be authorised internally. That remains an outstanding issue pending the Privacy International reference to the CJEU from the IPT.

-         The current notice-based data retention regime is not 'general and indiscriminate'. It considered that Tele2/Watson's requirement for objective targeted retention criteria could be met by requiring the Secretary of State to consider, when giving a retention notice to a telecommunications operator, factors such as whether restriction by geography or by excluding a group of customers are appropriate.  Today’s Liberty decision has found in the government’s favour on that point. Exclusion of national security apart, this is probably the most fundamental point of disagreement between the government and its critics.

-         Tele2/Watson applies to traffic data but not subscriber data (events data but not entity data, in the language of the Act). Today’s decision upholds the government’s position on that.

-         Tele2/Watson does not preclude access by the authorities to mandatorily retained data for some non-crime related purposes (such as public safety or preventing death, injury, or damage to someone's mental health). That was not an issue in today’s judgment.

As to notification, the government considered that the existing possibilities under the Act are sufficient. It also considered that Tele2/Watson did not intend to preclude transfers of mandatorily retained data outside the EU where an adequate level of protection exists. These remain outstanding issues pending the Privacy International reference to the CJEU from the IPT.


Sunday, 1 April 2018

It’s no laughing matter - the case for regulating humour


The fallout from the Count Dankula ‘Nazi pug’ video prosecution shows no sign of abating.  While many have condemned the conviction as an assault on freedom of speech, others are saying that the law does not go far enough.  They argue that the criminal law only catches these incidents after the event when the harm has already been done. How can we prevent the harm being done in the first place?

“It is like pollution”, said one commentator. “We apply the precautionary principle to environmental harm, and we should do the same to prevent the toxic effects of tasteless, offensive and unfunny jokes on the internet. Freedom of speech is paramount, but we must not let that get in the way of doing what is right for society.”

The internet has only exacerbated the problem, say government sources. “So-called jokes going viral on social media are a scourge of society. Social media platforms have the resources to weed this out. They must do more, but so must society. Of course we have no quarrel with occasional levity, but serious humour such as satire is too dangerous to be left to the unregulated private sector. We would like to see this addressed by a self-regulatory code of conduct, but we are ready to step in with legislation if necessary.”

One professional comedian said: “This reaches a crisis point on 1 April each year, when tens of thousands of self-styled humourists try their hand at a bit of amateur prankstering. Who do they think they are fooling? An unthinking quip can have devastating consequences for the poor, the vulnerable, and for society at large. This is no joke. Controversial humour should be in the hands of properly qualified and trained responsible professionals.”

An academic added: “Humour is a public good. You only have to look at the standard of jokes on the internet to realise that the market is, predictably, failing to supply quality humour. We are in a race to the bottom. Since humour can also have significant negative externalities, the case for regulation is overwhelming.”

So there appears to be a growing consensus. Will we see a professional corps of licensed comedians?  Will amateur jokers find themselves in jail? Has this blogger succeeded only in proving that parody should be left to those who know what they are doing? Only time will tell.