Tuesday 30 October 2018

What will be in Investigatory Powers Act Version 1.2?

Never trust version 1.0 of any software. Wait until the bugs have been ironed out; only then open your wallet.

The same is becoming true of the UK’s surveillance legislation. No sooner was the ink dry on the Investigatory Powers Act 2016 (IP Act) than the first bugs, located in the communications data retention module, were exposed by the EU Court of Justice (CJEU)’s judgment in Tele2/Watson.

After considerable delay in issuing required fixes, Version 1.1 is currently making its way through Parliament. The pending amendments to the Act make two main changes. They restrict to serious crime the crime-related purposes for which the authorities may demand access to mandatorily retained data, and they introduce prior independent authorisation for non-national security demands.

It remains uncertain whether more changes to the data retention regime will be required in order to comply with the Tele2/Watson judgment.  That should become clearer after the outcome of Liberty’s appeal to the Court of Appeal in its judicial review of the Act and various pending references to the CJEU.

Meanwhile the recent Strasbourg judgment in Big Brother Watch v UK (yet to be made final, pending possible referral to the Grand Chamber) has exposed a separate set of flaws in the IP Act’s predecessor legislation, the Regulation of Investigatory Powers Act 2000 (RIPA). These were in the bulk interception and communications data acquisition modules. To the extent that the flaws have been carried through into the new legislation, fixing them may require the IP Act to be patched with a new Version 1.2.

The BBW judgment does not read directly on to the IP Act. The new legislation is much more detailed than RIPA and introduces the significant improvement that warrants have to be approved by an independent Judicial Commissioner. Nevertheless, the BBW judgment carries significant implications for the IP Act.

The Court found that three specific aspects of RIPA violated the European Convention on Human Rights:
  • Lack of robust end to end oversight of bulk interception acquisition, selection and searching processes
  • Lack of controls on use of communications data acquired from bulk interception
  • Insufficient safeguards on access to journalistically privileged material, under both the bulk interception regime and the ordinary communications data acquisition regime

End to end oversight

The bulk interception process starts with selection of the bearers (cables or channels within cables) that will be tapped.  It culminates in various data stores that can be queried by analysts or used as raw material for computer analytics. In between are automated processes for filtering, selecting and analysing the material acquired from the bearers. Some of these processes operate in real time or near real time, others are applied to stored material and take longer. Computerised processes will evolve as available technology develops.

The Court was concerned about the lack of robust oversight under RIPA throughout all the stages, but especially of the selection and search criteria used for filtering. Post factum audit by the Interception of Communications Commissioner was judged insufficient.

For its understanding of the processes the Court relied upon a combination of sources: the Interception Code of Practice under RIPA, the Intelligence and Security Committee Report of March 2015, the Investigatory Powers Tribunal judgment of 5 December 2014 in proceedings brought by Liberty and others, and the Government’s submissions in the Strasbourg proceedings. The Court described the processes thus:

“…there are four distinct stages to the section 8(4) regime:

1.  The interception of a small percentage of Internet bearers, selected as being those most likely to carry external communications of intelligence value.
2.  The filtering and automatic discarding (in near real-time) of a significant percentage of intercepted communications, being the traffic least likely to be of intelligence value.
3.  The application of simple and complex search criteria (by computer) to the remaining communications, with those that match the relevant selectors being retained and those that do not being discarded.
4.  The examination of some (if not all) of the retained material by an analyst.”

The reference to a ‘small percentage’ of internet bearers derives from the March 2015 ISC Report. Earlier in the judgment the Court said:

“… GCHQ’s bulk interception systems operated on a very small percentage of the bearers that made up the Internet and the ISC was satisfied that GCHQ applied levels of filtering and selection such that only a certain amount of the material on those bearers was collected.”

Two points about this passage are worthy of comment. First, while the selected bearers may make up a very small percentage of the estimated 100,000 bearers that make up the global internet (judgment, [9]), that is not the same thing as the percentage of bearers that land in the UK.

Second, the ISC report is unclear about how far, if at all, filtering and selection processes are applied not just to content but also to communications data (metadata) extracted from intercepted material. Whilst the report describes filtering, automated searches on communications using complex criteria and analysts performing additional bespoke searches, it also says:

“Related CD (RCD) from interception: GCHQ’s principal source of CD is as a by-product of their interception activities, i.e. when GCHQ intercept a bearer, they extract all CD from that bearer. This is known as ‘Related CD’. GCHQ extract all the RCD from all the bearers they access through their bulk interception capabilities.” (emphasis added)

The impression that collection of related communications data may not be filtered is reinforced by the Snowden documents, which referred to several databases, derived from bulk interception, containing very large volumes of non-content events data. The prototype KARMA POLICE, a dataset focused on website browsing histories, was said to comprise 17.8 billion rows of data, representing 3 months’ collection. (The existence or otherwise of KARMA POLICE and similar databases has not been officially acknowledged, although the then Interception of Communications Commissioner, in his 2014 Annual Report, reported that he had made recommendations to interception agencies about retention periods for related communications data.)

The ISC was also “surprised to discover that the primary value to GCHQ of bulk interception was not in reading the actual content of communications, but in the information associated with those communications.”

If it is right that little or no filtering is applied to collection of related communications data (or secondary data as it is known in the IP Act), then the overall end to end process would look something like this (the diagram draws on Snowden documents published by The Intercept as well as the sources already mentioned):

Returning to the BBW judgment, the Court’s concerns related to intercepted ‘communications’ and ‘material’:

“the lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst…”

There is no obvious reason to limit those observations to content. Elsewhere in the judgment the Court was “not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content” and went on:

“The related communications data … could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with…”.

The Court went on to make specific criticisms of RIPA’s lack of restrictions on the use of related communications data, as discussed below.

What does the Court’s finding on end to end oversight mean for the IP Act? The Act introduces independent approval of warrants by Judicial Commissioners, but does it create the robust oversight of the end to end process, particularly of selectors and search criteria, that the Strasbourg Court requires?

The March 2015 ISC Report recommended that the oversight body be given express authority to review the selection of bearers, the application of simple selectors and initial search criteria, and the complex searches which determine which communications are read. David Anderson Q.C.'s (now Lord Anderson) Bulk Powers Review records (para 2.26(g)) an assurance given by the Home Office that that authority is inherent in clauses 205 and 211 of the Bill (now sections 229 and 235 of the IP Act).

Beyond that, under the IP Act the Judicial Commissioners have to consider at the warrant approval stage the necessity and proportionality of conduct authorised by a bulk warrant. Arguably that includes all four stages identified by the Strasbourg Court (see my submission to IPCO earlier this year). If that is right, the RIPA gap may have been partially filled.

However, the IP Act does not specify in terms that selectors and search criteria have to be reviewed. Moreover, focusing on those particular techniques already seems faintly old-fashioned. The Bulk Powers Review reveals the extent to which more sophisticated analytical techniques such as anomaly detection and pattern analysis are brought to bear on intercepted material, particularly communications data. Robust end to end oversight ought to cover these techniques as well as use of selectors and automated queries.  

The remainder of the gap could perhaps be filled by an explanation of how closely the Judicial Commissioners oversee the various selection, searching and other analytical processes.

Filling this gap may not necessarily require amendment of the IP Act, although it would be preferable if it were set out in black and white. It could perhaps be filled by an IPCO advisory notice: first as to its understanding of the relevant requirements of the Act; and second explaining how that translates into practical oversight, as part of bulk warrant approval or otherwise, of the end to end stages involved in bulk interception (and indeed the other bulk powers).

Related Communications Data/Secondary Data

The diagram above shows how communications data can be obtained from bulk interception. Under RIPA this was known as Related Communications Data. In the IP Act it is known as Secondary Data. Unlike RIPA, the IP Act specifies a category of bulk warrant that extracts secondary data alone (without content) from bearers.  However, the IP Act definition of secondary data also permits some items of content to be extracted from communications and treated as communications data.

Like RIPA, the IP Act contains few specific restrictions on the use to which secondary data can be put. It may be examined for a reason falling within the overall statutory purposes, subject to necessity and proportionality. The IP Act adds the requirement that the reason fall within the operational purposes (which can be broad) specified in the bulk warrant. As with RIPA, the restriction that the purpose of the bulk interception must be overseas-related does not apply at the examination stage. There is also a requirement to obtain specific authority (under the IP Act, a targeted examination warrant) to select for examination the communications of someone known to be within the British Islands; but, as under RIPA, this applies only to content, not to secondary data.

RIPA’s lack of restriction on examining related communications data was challenged in the Investigatory Powers Tribunal. The government argued (and did so again in the Strasbourg proceedings) that this was necessary in order to be able to determine whether a target was within the British Islands, and hence whether it was necessary to apply for specific authority from the Secretary of State to examine the content of the target’s communications.

The IPT accepted this argument, holding that the difference in the restrictions was justified and proportionate by virtue of the need to be able to determine whether a target was within the British Islands. It rejected as “an impossibly complicated or convoluted course” the suggestion that RIPA could have provided a specific exception to provide for the use of metadata for that purpose.

That, however, left open the question of all the other uses to which metadata could be put. If the Snowden documents referred to above are any guide, those uses are manifold.  Bulk intercepted metadata would hardly be of primary value to GCHQ, as described by the ISC, if its use were restricted to ascertaining whether a target was within or outside the British Islands.

The Strasbourg Court identified this gap in RIPA and held that the absence of restrictions on examining related communications data was a ground on which RIPA violated the ECHR.

The Court accepted that related communications data should be capable of being used in order to ascertain whether a target was within or outside the British Islands. It also accepted that that should not be the only use to which it could be put, since that would impose a stricter regime than for content.

But it found that there should nevertheless be “sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.”

Transposed to the IP Act, this could require a structure for selecting secondary data for examination along the following lines:
  • Selection permitted in order to determine whether an individual is, for the time being, in the British Islands.
  • Targeted examination warrant required if (a) any criteria used for the selection of the secondary data for examination are referable to an individual known to be in the British Islands, and (b) the purpose of using those criteria is to identify secondary data or content relating to communications sent by, or intended for, that individual.
  • Otherwise: selection of secondary data permitted (but subject to the robust end to end oversight requirements discussed above).

Although the Court speaks only of sufficient safeguards, it is difficult to see how this could be implemented without amendment of the IP Act.

Journalistic privilege

The Court found RIPA lacking in two areas: bulk interception (for both content and related communications data) and ordinary communications data acquisition. The task of determining to what extent the IP Act remedies the deficiencies is complex. However, in the light of the comparisons below it seems likely that at least some amendments to the legislation will be necessary.

Bulk interception
For bulk interception, the Court was particularly concerned that there were no requirements either:
  • circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector), or
  • requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved.

Consequently, the Court said, it would appear that analysts could search and examine without restriction both the content and the related communications data of those intercepted communications.

For targeted examination warrants the IP Act itself contains some safeguards relating to retention and disclosure of material where the purpose, or one of the purposes, of the warrant is to authorise the selection for examination of journalistic material which the intercepting authority believes is confidential journalistic material. Similar provisions apply if the purpose, or one of the purposes, of the warrant is to identify or confirm a source of journalistic information.

Where a targeted examination warrant is unnecessary the Interception Code of Practice provides for corresponding authorisations and safeguards by a senior official outside the intercepting agency.

Where a communication intercepted under a bulk warrant is retained following examination and it contains confidential journalistic material, the Investigatory Powers Commissioner must be informed as soon as reasonably practicable.

Unlike RIPA, S.2 of the IP Act contains a general provision requiring public authorities to have regard to the particular sensitivity of any information, including confidential journalistic material and the identity of a journalist’s source.

Whilst these provisions are an improvement on RIPA, it will be open to debate whether they are sufficient, particularly since the specific safeguards relate to arrangements for handling, retention, use and destruction of the communications rather than to search and selection.

Bulk communications data acquisition
The IP Act introduces a new bulk communications data acquisition warrant to replace directions under S.94 of the Telecommunications Act 1984. S.94 was not considered in the BBW case. The IP Act bulk power contains no provisions specifically protecting journalistic privilege. The Code of Practice expands on the general provisions in S.2 of the Act.

Ordinary communications data acquisition
The RIPA Code of Practice required an application to a judge under PACE 1984 where the purpose of the application was to determine a source. The Strasbourg court criticised this on the basis that it did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely.

The IP Act contains a specific provision requiring a public authority to seek the approval of the Investigatory Powers Commissioner to obtain communications data for the purpose of identifying or confirming a source of journalistic information. This provision appears to suffer the same narrowness of scope criticised by the Strasbourg Court.

Friday 19 October 2018

Take care with that social media duty of care

Should social media platforms be subject to a statutory duty of care, akin to occupiers’ liability or health and safety, with the aim of protecting against online harms? In a series of blogposts and evidence to the House of Lords Communications Committee William Perrin and Professor Lorna Woods suggest that the answer should be yes. They say in their evidence:

“A common comparison is that social media services are “like a publisher”. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as a public place – like an office, bar, or theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so. [25]
The law has proven very good at this type of protection in the physical realm. Workspaces, public spaces, even houses, in the UK owned or supplied by companies have to be safe for the people who use them. The law imposes a “duty of care” on the owners of those spaces. The company must take reasonable measures to prevent harm.” [26]
The aim of this post is to explore the comparability of offline duties of care, focusing on the duties of care owed by occupiers of physical public spaces to their visitors.
From the earliest days of the internet people have looked to offline analogies in the search for legal regimes suitable for the online world. Book and print distributors, with their intermediary role in disseminating information, were an obvious model for discussion forums and bulletin boards, the forerunners of today’s social media platforms.  The liability of distributors for the content of the materials they carried was limited. The EU Electronic Commerce Directive applied a broadly similar liability model to a wide range of online hosting activities including on social media platforms.
The principle of offline and online equivalence still holds sway: whilst no offline analogies are precise, as far as possible the same legal regime should apply to comparable online and offline activities.
A print distributor is a good analogy for a social media platform because they both involve dissemination of information. However, the analogy is not perfect. Distribution lacks the element of direct personal interaction between two principals who may come into conflict, a feature that is common to both social media and a physical public place. The relationship between a social media platform and its users has some parallels with that between the occupier of a physical space and its visitors.
A physical public place is not, however, a perfect analogy. Duties of care owed by physical occupiers relate to what is done, not said, on their premises. They concern personal injury and damage to property. Such safety-related duties of care are thus about those aspects of physical public spaces that are less like online platforms.
That is not to say that there is no overlap. Some harms that result from online interaction can be fairly described as safety-related. Grooming is an obvious example. However that is not the case for all kinds of harm. It may be tempting to label a broad spectrum of online behaviour as raising issues of online safety, as the government has tended to do in its Internet Safety Strategy Green Paper. However, that conceals rather than addresses the question of what constitutes a safety-related harm.
As a historical note, when a statutory duty of care for occupiers' liability was introduced in 1957 the objective was to abolish the fine distinctions that the common law had drawn between different kinds of visitor. The legislation did not expand the kinds of harm to which the duty applied. Those remained, as they do today, limited to safety-related harms: personal injury and damage to property.
Other, closer kinds of relationship, such as employer and employee, may give rise to a duty of care in respect of broader kinds of harm. Thus under the Health and Safety at Work etc. Act 1974 an employer’s duty in respect of employees is in relation to their health, safety and welfare, whereas its duty in respect of other persons is limited to their health and safety. The employer-employee relationship does not correspond to the occupier-visitor relationship that characterises the analogy between physical world public spaces and online platforms.
Non-safety related harms are generally addressed by subject-specific legislation which takes account of the nature of the wrongdoing and the harm in question.
To the extent that common law duties of care do apply to non-safety related harms, they arise out of relationships that are not analogous to a site and visitor. Thus if a person assumes responsibility to someone who relies on their incorrect statement, they may owe a duty of care in respect of financial loss suffered as a result. That is a duty owed by the maker of the statement to the person who relies upon it. There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other.
Many harms that may be encountered online (putting aside the question of whether some are properly described as harms at all) are of a different nature from the safety-related dangers in respect of which occupier-related duties of care are imposed in a physical public space.
We shall also see that unlike dangers commonly encountered in a physical place, such as tripping on a dangerous path, the kind of online harms that it is suggested should be within the ambit of a duty of care typically arise out of how users behave to each other rather than from interaction between a visitor and the occupier itself.
Duties of care arising out of occupation of a physical public place
The “operator” of a physical world place such as an office, bar, or theme park is subject to legal duties of care. In its capacity as occupier, by statute it automatically owes a duty of care to visitors in relation to the safety of the premises. It may also owe visitors a common law duty of care in some situations not covered by the statutory duty of care. In either case the duty of care relates to danger, in the sense of risk of personal injury or damage to property.
The Perrin/Woods evidence describes the principle of a duty of care:
“The idea of a “duty of care” is straightforward in principle. A person (including companies) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result then there are legal consequences. [24] …
In our view the generality and simplicity of a duty of care works well for the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking a similar approach to corporate owned public spaces, workplaces, products etc in the physical world, harm can be reduced in social networks.” [28]
The general idea of a duty of care can be articulated relatively simply. However that does not mean that a duty of care always exists, or that any given duty of care is general in substance.
In many situations a duty of care will not exist. It may exist in relation to some kinds of harm but not others, in relation to some people but not others, or in relation to some kinds of conduct but not others.
Occupiers’ liability is a duty of care defined by statute. As such the initial common law step of deciding whether a duty of care exists is removed. The statute lays down that a duty of care is owed to visitors in respect of dangers due to the state of the premises or to things done or omitted to be done on them.
“Things done or omitted to be done” on the premises refers to kinds of activities that relate to occupancy and create a risk of personal injury or damage to property – for instance allowing speedboats on a lake used by swimmers, or operating a car park. The statutory duty does not extend to every kind of activity that people engage in on the premises.
The content of the statutory duty is to take reasonable care to see that the visitor will be reasonably safe in using the premises for the purposes for which he is invited or permitted by the occupier to be there. For some kinds of danger the duty of care may not require the occupier to take any steps at all. For instance, there is no duty to warn of obvious risks.
As to the common law, the courts some time ago abandoned the search for a universal touchstone by which to determine whether a duty of care exists. When the courts extend categories of duty of care they do so incrementally, with close regard to situations in which duties of care already exist. They take into account proximity of relationship between the persons by whom and to whom the duty is said to be owed, foreseeability of harm and whether it is fair, just and reasonable to impose a duty of care.
That approach brings into play the scope and content of the obligation said to be imposed: a duty of care to do what, and in respect of what kinds of harm? In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in abstract terms divorced from factual context:
"It is never sufficient to ask simply whether A owes B a duty of care. It always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless."
That is an especially pertinent consideration if the kinds of harm for which an online duty of care is advocated differ from those in respect of which offline duties of care exist. As with the statutory duty, common law duties of care arising from occupation of physical premises concern safety-related harms: personal injury and damage to property.
Outside the field of occupiers’ liability, a particularly close relationship with the potential victim, for instance employer and employee or school and pupil, may give rise to a more extensive duty of care.
A duty of care may sometimes be owed because of a particular relationship between the defendant and the perpetrator (as opposed to the victim). That was the basis on which a Borstal school was held to owe a duty of care to a member of the public whose property was damaged by an escaped inmate.
Vicarious liability and non-delegable duties of care can in some circumstances render a person liable for someone else's breach of duty.
However, none of these situations corresponds to the relationship between occupiers of public spaces and their visitors.
A duty of care to prevent one visitor harming another
An occupier’s duty of care may be described in broad terms as a duty to provide a reasonably safe environment for visitors.  However that bears closer examination.
The paradigm case of a visitor tripping over a dangerous paving stone or injured when using a badly maintained theme park ride does not translate well into the online environment.  The kind of duty of care that would be most relevant to a social media platform is different: a duty to take steps to prevent, or reduce the risk of, one site visitor harming another.
While that kind of duty is not unheard of in respect of physical public places, it has been applied in very specific circumstances: for instance a bar serving alcohol, a football club in respect of behaviour of rival fans or a golf club in respect of mishit balls.  These related to specific activities that created the danger in question. The duties apply to safety properly so called - risk of personal injury inflicted by one visitor on another – but not to what visitors say to each other.  
This limited kind of duty of care may be compared with the proposal in the Perrin/Woods evidence. It suggests that what is, in substance, a universal duty of care should apply to large social media platforms (over 1,000,000 users/members/viewers in the UK) in relation to:
"a)       Harmful threats – statement of an intention to cause pain, injury, damage or other hostile action such as intimidation. Psychological harassment, threats of a sexual nature, threats to kill, racial or religious threats known as hate crime. Hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend the understanding of “hate” to include misogyny.
b)      Economic harm – financial misconduct, intellectual property abuse,
c)       Harms to national security – violent extremism, terrorism, state sponsored cyber warfare
d)      Emotional harm – preventing emotional harm suffered by users such that it does not build up to the criminal threshold of a recognised psychiatric injury.  For instance through aggregated abuse of one person by many others in a way that would not happen in the physical world ([…] on emotional harm below a criminal threshold). This includes harm to vulnerable people – in respect of suicide, anorexia, mental illness etc.
e)       Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse ([…])
f)         Harms to justice and democracy – prevent intimidation of people taking part in the political process beyond robust debate, protecting the criminal and trial process ([…])"
These go far wider than the safety-related harms that underpin the duties of care to which the occupiers of physical world public spaces are subject.
Perrin and Woods have recognised this elsewhere, suggesting that the common law duty of care would be "insufficient" in "the majority of cases in relation to social media due, in part, to the jurisprudential approach to non-physical injury”.  However, this assumes the conclusion that an online duty of care ought to apply to broader kinds of harm. Whether a particular kind of harm is appropriate for a duty of care-based approach would be a significant question.
Offline duties of care applicable to the proprietors of physical world public spaces do not correspond to a universal duty of care to prevent broadly defined notions of harm resulting from the behaviour of visitors to each other.
It may be said that the kind of harm that is foreseeable on a social media platform is different from that which is foreseeable in a bar, a football ground or a theme park. On that basis it may be argued that a duty of care should apply in respect of a wider range of harms. However, that is an argument from difference, not similarity. The duties of care applicable to an occupier’s liability to visitors in a physical world space, both statutory and common law, are limited to safety-related harms. That is a long-standing and deliberate policy.
The purpose of a duty of care
The Perrin/Woods evidence describes the purpose of duties of care in terms that they internalise external costs ([14], [18]) and make companies invest in safety by taking reasonable measures to prevent harm ([26]). Harms represent “external costs generated by production of the social media providers’ products” ([14]).
However, articulating the purpose of duties of care does not provide an answer to how we should determine what should be regarded as harmful external costs in the first place, which kind of harms should and should not be the subject of a duty of care and the extent (if any) to which a duty of care should oblige an operator to take steps to prevent actions of third party users.
There is also an assumption that consequences of user actions are external costs generated by the platform's products, rather than costs generated by users themselves. That is something like equating a locomotive emitting sparks with what passengers say to each other in the carriages.
Offline duties of care do not attempt to internalise all external costs.  Some might say that the offline regime should go further. However, an analogy with the offline duty of care regime has to start from what is, rather than from what is not.
Examples of physical world duties of care
It can be seen from the above that for the purpose of analogy the two most relevant aspects of duties of care in physical public spaces are: (1) the extent of any duty owed by the occupier in respect of behaviour by visitors towards each other and (2) the types of harm in respect of which such a duty of care applies.
Duties owed to visitors in respect of behaviour to each other
One physical world example mentioned in the Perrin/Woods paper is the bar. The common law duty of care owed by a members' bar to its visitors was considered by the Court of Appeal in Everett v Comojo.  This was a case of personal injury: a guest stabbing two other guests several times, leading to a claim that the owners of the club should have taken steps to prevent the perpetrator committing the assault.  On the facts the club was held not to have breached any duty of care that it owed. The court held that it did owe a duty of care analogous to statutory occupiers' liability. The content of the duty of care was limited. The bar was under no obligation to search guests on entry for offensive weapons. There had been no prior indication that the guest was about to turn violent. While a waitress had become concerned, and went to talk to the manager, she could not have been criticised if she had done nothing.
The judge suggested that a club with a history of people bringing in offensive weapons might have a duty to search guests at the door. In a club with a history of outbreaks of violence the duty might be to have staff on hand to control the outbreak. Some clubs might have to have security personnel permanently present.   In a club with no history the duty might only be to train staff to look out for trouble and to alert security personnel.
This variable duty of care existed in respect of personal injury in the specific situation where the serving of alcohol created a particular risk of loss of control and violence by patrons.
We can also consider the sports ground. In Cunningham v Reading Football Club Ltd the football club was found to have breached its statutory duty of care to a policeman who was injured when visiting fans broke pieces of concrete off the “appallingly dilapidated” terraces and used them as missiles. The club was found to have been well aware that the visiting crowd was very likely indeed to contain a violent element. Similar incidents involving lumps of concrete broken off from the terracing had occurred at a match played at the same ground less than four months earlier and no steps had been taken in the meantime to make that more difficult.
In a Scottish case a golf club was held liable for injuries suffered by a golfer struck by a golf ball played by a fellow golfer, on the basis of lack of warning signs in an area at risk from a mishit ball.
The Perrin/Woods evidence cites the example of a theme park. The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.
It can be seen that what is required by a duty of care may vary with the factual circumstances. The Perrin/Woods evidence emphasises the flexibility of a duty of care according to the degree of risk, although it advocates putting that assessment in the hands of a regulator (that is another debate).
However, we should not lose sight of the fact that in the offline world the variable content of duties of care is contained within boundaries that determine whether a duty of care exists at all and in respect of what kinds of harm.

The law does not impose a universally applicable duty of care to take steps to prevent or reduce any kind of foreseeable harm that visitors may cause to each other; certainly not when the harm is said to have been inflicted by words rather than by a knife, a flying lump of concrete or an errant golf ball.
Types of harm
That brings us to the kind of harm that an online duty of care might seek to prevent.
A significant difference from offline physical spaces is that internet platforms are based on speech. That is why distribution of print information has served well as an analogy.
Where activities like grooming, harassment and intimidation are concerned, it is true that the fact that words may be the means by which they are carried out is of no greater significance online than it is offline. Saying may cross the line into doing. And an online conversation can lead to a real world encounter or take place in the context of a real world relationship outside the platform.
Nevertheless, offensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.
Words and images may cause distress. It may be said that they can cause psychiatric harm. But even in the two-way scenario of one person injuring another, there is argument over the proper boundaries of recoverable psychiatric damage by those affected, directly or indirectly. Only in the case of intentional infliction of severe distress can pure psychiatric damage be recovered.
The difficulties are compounded in the three-way scenario: a duty of care on a platform to prevent or reduce the risk of one visitor using words that cause psychiatric damage or emotional harm to another visitor. Such a duty involves predicting the potential psychological effect of words on unknown persons. The obligation would be of a quite different kind from the duty on the occupier of a football ground to take care to repair dilapidated terracing, with a known risk of personal injury by fans prising up lumps of concrete and using them as missiles.
It might be countered that the platform would have only to consider whether the risk of psychological or emotional harm exceeded a threshold. But the lower the threshold, the greater the likelihood of collateral damage by suppression of legitimate speech. A regime intended to internalise a negative externality then propagates a different negative externality created by the duty of care regime itself.  This is an inevitable risk of extrapolating safety-related duties of care to speech-related harms.
Some of the difficulties in relation to psychiatric harm and freedom of speech are illustrated by the UK Supreme Court case of Rhodes v OPO. This claim was brought under the rule in Wilkinson v Downton, which by way of exception from the general rules of negligence permits recovery for deliberately inflicted severe distress resulting in psychiatric illness. The case was about whether the author of an autobiography should be prevented from publishing by an interlocutory injunction. The claim was that, if his child were to read it, the author would be intentionally causing distress to the child as a result of the blunt and graphic descriptions of the abuse that the author had himself suffered as a child.  The Supreme Court allowed the publication to proceed.
The Court of Appeal had held that there could be no justification for the publication if it was likely to cause psychiatric harm to the child. The Supreme Court disagreed, commenting that:
“that approach excluded consideration of the wider question of justification based on the legitimate interest of the defendant in telling his story to the world at large in the way in which he wishes to tell it, and the corresponding interest of the public in hearing his story. … ” [75]
It went on:
“It is difficult to envisage any circumstances in which speech which is not deceptive, threatening or possibly abusive, could give rise to liability in tort for wilful infringement of another’s right to personal safety. The right to report the truth is justification in itself. That is not to say that the right of disclosure is absolute … . But there is no general law prohibiting the publication of facts which will cause distress to another, even if that is the person’s intention.” [77]
This passage aptly illustrates the caution that has to be exercised in applying physical world concepts of harm, injury and safety to communication and speech, even before considering the further step of imposing a duty of care on a platform to take steps to reduce the risk of their occurrence as between third parties, or the yet further step of appointing a regulator to superintend the platform’s systems for doing so.
The Supreme Court went on to criticise the injunction granted by the Court of Appeal, which had permitted publication of the book only in a bowdlerised version. It emphasised the right of the author to communicate his experiences using brutal language:
“His writing contains dark descriptions of emotional hell, self-hatred and rage, as can be seen in the extracts which we have set out. The reader gains an insight into his pain but also his resilience and achievements. To lighten the darkness would reduce its effect. The court has taken editorial control over the manner in which the appellant’s story is expressed. A right to convey information to the public carries with it a right to choose the language in which it is expressed in order to convey the information most effectively.” [78]
Prior restraint
The Supreme Court in Rhodes emphasised not only the right of the author to tell the world about his experience, but the “corresponding public interest in others being able to listen to his life story in all its searing detail”.
It may be thought that there is no issue with requiring platforms to remove content, so long as the person who posted it has access to a put-back and appeal procedure.
That, however, addresses only one side of the freedom of speech coin.  It does nothing to address the corresponding interest of others in being able to read it, a right which they will never be able to exercise if a platform has been required to prevent an item seeing the light of day and the originator then does nothing to challenge the decision.
We derive from the right of freedom of speech a set of principles that collide with the kind of actions that duties of care might require, such as monitoring and pre-emptive removal of content. The precautionary principle may have a place in preventing harm such as pollution, but when applied to speech it translates directly into prior restraint. The presumption against prior restraint refers not just to pre-publication censorship, but the principle that speech should stay available to the public until the merits of a complaint have been adjudicated by a legally competent independent tribunal.  The fact that we are dealing with the internet does not negate the value of procedural protections for speech.
Not every duty of care involves monitoring and removal of content. Not all use of words amounts to pure speech. Nevertheless, we are in dangerous territory when we seek to apply preventive non-specific duties of care to users' communications.
Duties of care and the Electronic Commerce Directive
Duties of care are relevant to the intermediary liability protections of the Electronic Commerce Directive. Article 15 prevents a general monitoring obligation being imposed on conduits, hosts or caches.  However Recital 48 says:
“This Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities.”
This does not itself impose a duty of care on intermediaries. It simply leaves room for Member States to impose various kinds of duty of care so long as they do not contravene Article 15 or run counter to the liability protections in Articles 12 to 14.
Article 15 again focuses attention on the question “A duty of care to do what?” A duty of care that required a platform to provide users with access to an emergency button would not breach Article 15. An obligation to screen user communications would do so.
This piece started by observing that no analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the proposed duty of care would consist in preventing harmful behaviour by one site visitor towards another.
The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.

Sunday 7 October 2018

A Lord Chamberlain for the internet? Thanks, but no thanks.

This summer marked the fiftieth anniversary of the Theatres Act 1968, the legislation that freed the theatres from the censorious hand of the Lord Chamberlain of Her Majesty’s Household. Thereafter theatres needed to concern themselves only with the general laws governing speech. In addition they were granted a public good defence to obscenity and immunity from common law offences against public morality.

The Theatres Act is celebrated as a landmark of enlightenment. Yet today we are on the verge of creating a Lord Chamberlain of the Internet. We won't call it that, of course. The Times, in its leader of 5 July 2018, came up with the faintly Orwellian "Ofnet". Speculation has recently revived that the UK government is laying plans to create a social media regulator to tackle online harm. What form that might take, should it happen, we do not know. We will find out when the government produces a promised white paper.

When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.

The statute book is full of legislation that restrains speech. Most, if not all, of this legislation applies online as well as offline. Some of it applies more strictly online than offline. These laws set boundaries: defamation, obscenity, intellectual property rights, terrorist content, revenge porn, harassment, incitement to racial and religious hatred and many others. Those boundaries represent a balance between freedom of speech and harm to others. It is for each of us to stay inside the boundaries, wherever they may be set. Within those boundaries we are free to say what we like, whatever someone in authority may think. Independent courts, applying principles, processes and presumptions designed to protect freedom of speech, adjudge alleged infractions according to clear, certain laws enacted by Parliament.

But much of the current discussion centres on something quite different: regulation by regulator. This model concentrates discretionary power in a state agency. In the UK the model is to a large extent the legacy of the 1980s Thatcher government, which started the OF trend by creating OFTEL (as it then was) to regulate the newly liberalised telecommunications market. A powerful regulator, operating flexibly within broadly stated policy goals, can be rule-maker, judge and enforcer all rolled into one.

That may be a long-established model for economic regulation of telecommunications competition, energy markets and the like. But when regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve.
As OFCOM puts it in its recent discussion paper ‘Addressing Harmful Online Content’: “What has worked in a broadcasting context is having a set of objectives laid down by Parliament in statute, underpinned by detailed regulatory guidance designed to evolve over time. Changes to the regulatory requirements are informed by public consultation.”

Where exactly the limits on freedom of speech should lie is a matter of intense, perpetual, debate. It is for Parliament to decide, after due consideration, whether to move the boundaries. It is anathema to both freedom of speech and the rule of law for Parliament to delegate to a regulator the power to set limits on individual speech.

It becomes worse when a document like the government’s Internet Safety Strategy Green Paper takes aim at subjective notions of social harm and unacceptability rather than strict legality and illegality according to the law. ‘Safety’ readily becomes an all-purpose banner under which to proceed against nebulous categories of speech which the government dislikes but cannot adequately define.

Also troubling is the frequently erected straw man that the internet is unregulated. This blurs the vital distinction between the general law and regulation by regulator. Participants in the debate are prone to discuss regulation as if the general law did not exist.

Occasionally the difference is acknowledged, but not necessarily as a virtue. The OFCOM discussion paper observes that by contrast with broadcast services subject to long established regulation, some newer online services are ‘subject to little or no regulation beyond the general law’, as if the general law were a mere jumping-off point for further regulation rather than the democratically established standard for individual speech.

OFCOM goes on to say that this state of affairs was “not by design, but the outcome of an evolving system”. However, a deliberate decision was taken with the Communications Act 2003 to exclude OFCOM’s jurisdiction over internet content in favour of the general law alone.

Moving away from individual speech, the OFCOM paper characterises the fact that online newspapers are not subject to the impartiality requirements that apply to broadcasters as an inconsistency. Different, yes. Inconsistent, no.

Periodically since the 1990s the idea has surfaced that as a result of communications convergence broadcast regulation should, for consistency, apply to the internet. With the advent of video over broadband aspects of the internet started to bear a superficial resemblance to television. The pictures were moving, send for the TV regulator.

EU legislators have been especially prone to this non-sequitur. They are currently enacting a revision of the Audiovisual Media Services Directive that will require a regulator to exercise some supervisory powers over video sharing platforms.

However, it is broadcast regulation, not the rule of general law, that is the exception rather than the norm. It is one thing for a body like OFCOM to act as broadcast regulator, reflecting television’s historic roots in spectrum scarcity and Reithian paternalism. Even that regime is looking more and more anachronistic as TV becomes less and less TV-like. It is quite another to set up a regulator with power to affect individual speech. And it is no improvement if the task of the regulator is framed as setting rules about the platforms’ rules. The result is the same: discretionary control exercised by a state entity (however independent of the government it may be) over users’ speech, via rules that Parliament has not specifically legislated.

It is true, as the OFCOM discussion paper notes, that the line between broadcast and non-broadcast regulation means that the same content can be subject to different rules depending on how it is accessed. If that is thought to be anomalous, it is a small price to pay for keeping regulation by regulator out of areas in which it should not tread.

The House of Commons Digital, Culture, Media and Sport Committee, in its July 2018 interim report on fake news, recommended that the government should use OFCOM’s broadcast regulation powers, “including rules relating to accuracy and impartiality”, as “a basis for setting standards for online content”. It is perhaps testament to the loss of perspective that the internet routinely engenders that a Parliamentary Committee could, in all seriousness, suggest that accuracy and impartiality rules should be applied to the posts and tweets of individual social media users.

Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d'être. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech. Offensive or controversial words are not akin to oil washed up on the seashore or chemicals discharged into a river. Objectively ascertainable physical damage caused by an oil spill bears no relation to a human being evaluating and reacting to the merits and demerits of what people say and write.

If we go further and transpose the environmental precautionary principle to speech we then have prior restraint – the opposite of the presumption against prior restraint that has long been regarded as a bulwark of freedom of expression. All the more surprising then that The Times, in its July Ofnet editorial, should complain of the internet that “by the time police and prosecutors are involved the damage has already been done”. That is an invitation to step in and exercise prior restraint.

As an aside, do the press really think that Ofnet would not before long be knocking on their doors to discuss their online editions? That is what happened when ATVOD tried to apply the Audiovisual Media Services Directive to online newspapers that incorporated video. Ironically it was The Times' sister paper, the Sun, that successfully challenged that attempt.

The OFCOM discussion paper observes that there are “reasons to be cautious over whether [the broadcast regime] could be exported wholesale to the internet”. Those reasons include that “expectations of protection or [sic] freedom of expression relating to conversations between individuals may be very different from those relating to content published by organisations”.

US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back in its bottle.

Regulation by regulator, applied to speech, harks back to the bad old days of the Lord Chamberlain and theatres. In a free and open society we do not appoint a Lord Chamberlain of the Internet – even one appointed by Parliament rather than by the Queen – to tell us what we can and cannot say online, whether directly or via the proxy of online intermediaries. The boundaries are rightly set by general laws.

We can of course debate what those laws should be. We can argue about whether intermediary liability laws are appropriately set. We can consider what tortious duties of care apply to online intermediaries and whether those are correctly scoped. We can debate the dividing line between words and conduct. We can discuss the vexed question of an internet that is both reasonably safe for children and fit for grown-ups. We can think about better ways of enforcing laws and providing victims of unlawful behaviour with remedies. These are matters for public debate and for Parliament and the general law within the framework of fundamental rights. None of this requires regulation by regulator. Quite the opposite.

Nor is it appropriate to frame these matters of debate as (in the words of The Times) “an opportunity to impose the rule of law on a legal wilderness where civic instincts have been suspended in favour of unthinking libertarianism for too long”. People who use the internet, like people everywhere, are subject to the rule of law. The many UK internet users who have ended up before the courts, both civil and criminal, are testament to that. Disagreement with the substantive content of the law does not mean that there is a legal vacuum.

What we should be doing is to take a hard look at what laws do and don’t apply online (the Law Commission is already looking at social media offences), revise those laws if need be and then look at how they can most appropriately be enforced.

This would involve looking at areas that it is tempting for a government to avoid, such as access to justice. How can we give people quick and easy access to independent tribunals with legitimacy to make decisions about online illegality? The current court system cannot provide that service at scale, and it is quintessentially a job for government rather than private actors. More controversially, is there room for greater use of powers such as ‘internet ASBOs’ to target the worst perpetrators of online illegality? The existing law contains these powers, but they seem to be little used.

It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.