Friday, 19 October 2018

Take care with that social media duty of care

Should social media platforms be subject to a statutory duty of care, akin to occupiers’ liability or health and safety, with the aim of protecting against online harms? In a series of blogposts and evidence to the House of Lords Communications Committee, William Perrin and Professor Lorna Woods suggest that the answer should be yes. They say in their evidence:

“A common comparison is that social media services are “like a publisher”. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as a public place – like an office, bar, or theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so. [25]
The law has proven very good at this type of protection in the physical realm. Workspaces, public spaces, even houses, in the UK owned or supplied by companies have to be safe for the people who use them. The law imposes a “duty of care” on the owners of those spaces. The company must take reasonable measures to prevent harm.” [26]
The aim of this post is to explore the comparability of offline duties of care, focusing on the duties of care owed by occupiers of physical public spaces to their visitors.
From the earliest days of the internet people have looked to offline analogies in the search for legal regimes suitable for the online world. Book and print distributors, with their intermediary role in disseminating information, were an obvious model for discussion forums and bulletin boards, the forerunners of today’s social media platforms.  The liability of distributors for the content of the materials they carried was limited. The EU Electronic Commerce Directive applied a broadly similar liability model to a wide range of online hosting activities including on social media platforms.
The principle of offline and online equivalence still holds sway: whilst no offline analogies are precise, as far as possible the same legal regime should apply to comparable online and offline activities.
A print distributor is a good analogy for a social media platform because they both involve dissemination of information. However, the analogy is not perfect. Distribution lacks the element of direct personal interaction between two principals who may come into conflict, a feature that is common to both social media and a physical public place. The relationship between a social media platform and its users has some parallels with that between the occupier of a physical space and its visitors.
A physical public place is not, however, a perfect analogy. Duties of care owed by physical occupiers relate to what is done, not said, on their premises. They concern personal injury and damage to property. Such safety-related duties of care are thus about those aspects of physical public spaces that are less like online platforms.
That is not to say that there is no overlap. Some harms that result from online interaction can be fairly described as safety-related. Grooming is an obvious example. However that is not the case for all kinds of harm. It may be tempting to label a broad spectrum of online behaviour as raising issues of online safety, as the government has tended to do in its Internet Safety Strategy Green Paper. However, that conceals rather than addresses the question of what constitutes a safety-related harm.
As a historical note, when a statutory duty of care for occupiers' liability was introduced in 1957 the objective was to abolish the fine distinctions that the common law had drawn between different kinds of visitor. The legislation did not expand the kinds of harm to which the duty applied. Those remained, as they do today, limited to safety-related harms: personal injury and damage to property.
Other closer kinds of relationship, such as employer and employee, may give rise to a duty of care in respect of broader kinds of harm. So under the Health and Safety at Work etc. Act 1974 an employer’s duty in respect of employees is in relation to their health, safety and welfare, whereas its duty in respect of other persons is limited to their health and safety. The employer-employee relationship does not correspond to the occupier-visitor relationship that characterises the analogy between physical world public spaces and online platforms.
Non-safety related harms are generally addressed by subject-specific legislation which takes account of the nature of the wrongdoing and the harm in question.
To the extent that common law duties of care do apply to non-safety related harms, they arise out of relationships that are not analogous to a site and visitor. Thus if a person assumes responsibility to someone who relies on their incorrect statement, they may owe a duty of care in respect of financial loss suffered as a result. That is a duty owed by the maker of the statement to the person who relies upon it. There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other.
Many harms that may be encountered online (putting aside the question of whether some are properly described as harms at all) are of a different nature from the safety-related dangers in respect of which occupier-related duties of care are imposed in a physical public space.
We shall also see that unlike dangers commonly encountered in a physical place, such as tripping on a dangerous path, the kind of online harms that it is suggested should be within the ambit of a duty of care typically arise out of how users behave to each other rather than from interaction between a visitor and the occupier itself.
Duties of care arising out of occupation of a physical public place
The “operator” of a physical world place such as an office, bar, or theme park is subject to legal duties of care. In its capacity as occupier, by statute it automatically owes a duty of care to visitors in relation to the safety of the premises. It may also owe visitors a common law duty of care in some situations not covered by the statutory duty of care. In either case the duty of care relates to danger, in the sense of risk of personal injury or damage to property.
The Perrin/Woods evidence describes the principle of a duty of care:
“The idea of a “duty of care” is straightforward in principle. A person (including companies) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result then there are legal consequences. [24] …
In our view the generality and simplicity of a duty of care works well for the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking a similar approach to corporate owned public spaces, workplaces, products etc in the physical world, harm can be reduced in social networks.” [28]
The general idea of a duty of care can be articulated relatively simply. However that does not mean that a duty of care always exists, or that any given duty of care is general in substance.
In many situations a duty of care will not exist. It may exist in relation to some kinds of harm but not others, in relation to some people but not others, or in relation to some kinds of conduct but not others.
Occupiers’ liability is a duty of care defined by statute. As such the initial common law step of deciding whether a duty of care exists is removed. The statute lays down that a duty of care is owed to visitors in respect of dangers due to the state of the premises or to things done or omitted to be done on them.
“Things done or omitted to be done” on the premises refers to kinds of activities that relate to occupancy and create a risk of personal injury or damage to property – for instance allowing speedboats on a lake used by swimmers, or operating a car park. The statutory duty does not extend to every kind of activity that people engage in on the premises.
The content of the statutory duty is to take reasonable care to see that the visitor will be reasonably safe in using the premises for the purposes for which he is invited or permitted by the occupier to be there. For some kinds of danger the duty of care may not require the occupier to take any steps at all. For instance, there is no duty to warn of obvious risks.
As to the common law, the courts some time ago abandoned the search for a universal touchstone by which to determine whether a duty of care exists. When the courts extend categories of duty of care they do so incrementally, with close regard to situations in which duties of care already exist. They take into account proximity of relationship between the persons by whom and to whom the duty is said to be owed, foreseeability of harm and whether it is fair, just and reasonable to impose a duty of care.
That approach brings into play the scope and content of the obligation said to be imposed: a duty of care to do what, and in respect of what kinds of harm? In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in abstract terms divorced from factual context:
"It is never sufficient to ask simply whether A owes B a duty of care. It always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless."
That is an especially pertinent consideration if the kinds of harm for which an online duty of care is advocated differ from those in respect of which offline duties of care exist. As with the statutory duty, common law duties of care arising from occupation of physical premises concern safety-related harms: personal injury and damage to property.
Outside the field of occupiers’ liability, a particularly close relationship with the potential victim, for instance employer and employee or school and pupil, may give rise to a more extensive duty of care.
A duty of care may sometimes be owed because of a particular relationship between the defendant and the perpetrator (as opposed to the victim). That was the basis on which a Borstal school was held to owe a duty of care to a member of the public whose property was damaged by an escaped inmate.
Vicarious liability and non-delegable duties of care can in some circumstances render a person liable for someone else's breach of duty.
However, none of these situations corresponds to the relationship between occupiers of public spaces and their visitors.
A duty of care to prevent one visitor harming another
An occupier’s duty of care may be described in broad terms as a duty to provide a reasonably safe environment for visitors.  However that bears closer examination.
The paradigm case of a visitor tripping over a dangerous paving stone or injured when using a badly maintained theme park ride does not translate well into the online environment.  The kind of duty of care that would be most relevant to a social media platform is different: a duty to take steps to prevent, or reduce the risk of, one site visitor harming another.
While that kind of duty is not unheard of in respect of physical public places, it has been applied in very specific circumstances: for instance a bar serving alcohol, a football club in respect of behaviour of rival fans or a golf club in respect of mishit balls.  These related to specific activities that created the danger in question. The duties apply to safety properly so called - risk of personal injury inflicted by one visitor on another – but not to what visitors say to each other.  
This limited kind of duty of care may be compared with the proposal in the Perrin/Woods evidence. It suggests that what is, in substance, a universal duty of care should apply to large social media platforms (over 1,000,000 users/members/viewers in the UK) in relation to:
"a)       Harmful threats – statement of an intention to cause pain, injury, damage or other hostile action such as intimidation. Psychological harassment, threats of a sexual nature, threats to kill, racial or religious threats known as hate crime. Hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend the understanding of “hate” to include misogyny.
b)      Economic harm – financial misconduct, intellectual property abuse,
c)       Harms to national security – violent extremism, terrorism, state sponsored cyber warfare
d)      Emotional harm – preventing emotional harm suffered by users such that it does not build up to the criminal threshold of a recognised psychiatric injury.  For instance through aggregated abuse of one person by many others in a way that would not happen in the physical world ([…] on emotional harm below a criminal threshold). This includes harm to vulnerable people – in respect of suicide, anorexia, mental illness etc.
e)       Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse ([…])
f)         Harms to justice and democracy – prevent intimidation of people taking part in the political process beyond robust debate, protecting the criminal and trial process ([…])"
These go far wider than the safety-related harms that underpin the duties of care to which the occupiers of physical world public spaces are subject.
Perrin and Woods have recognised this elsewhere, suggesting that the common law duty of care would be “insufficient” in “the majority of cases in relation to social media due, in part, to the jurisprudential approach to non-physical injury”. However, this assumes the conclusion that an online duty of care ought to apply to broader kinds of harm. Whether a particular kind of harm is appropriate for a duty of care-based approach would be a significant question.
Offline duties of care applicable to the proprietors of physical world public spaces do not correspond to a universal duty of care to prevent broadly defined notions of harm resulting from the behaviour of visitors to each other.
It may be said that the kind of harm that is foreseeable on a social media platform is different from that which is foreseeable in a bar, a football ground or a theme park. On that basis it may be argued that a duty of care should apply in respect of a wider range of harms. However, that is an argument from difference, not similarity. The duties of care applicable to an occupier’s liability to visitors in a physical world space, both statutory and common law, are limited to safety-related harms. That is a long-standing and deliberate policy.
The purpose of a duty of care
The Perrin/Woods evidence describes the purpose of duties of care in terms that they internalise external costs ([14], [18]) and make companies invest in safety by taking reasonable measures to prevent harm ([26]). Harms represent “external costs generated by production of the social media providers’ products” ([14]).
However, articulating the purpose of duties of care does not provide an answer to how we should determine what should be regarded as harmful external costs in the first place, which kinds of harm should and should not be the subject of a duty of care, and the extent (if any) to which a duty of care should oblige an operator to take steps to prevent the actions of third party users.
There is also an assumption that consequences of user actions are external costs generated by the platform's products, rather than costs generated by users themselves. That is something like equating a locomotive emitting sparks with what passengers say to each other in the carriages.
Offline duties of care do not attempt to internalise all external costs.  Some might say that the offline regime should go further. However, an analogy with the offline duty of care regime has to start from what is, rather than from what is not.
Examples of physical world duties of care
It can be seen from the above that for the purpose of analogy the two most relevant aspects of duties of care in physical public spaces are: (1) the extent of any duty owed by the occupier in respect of behaviour by visitors towards each other and (2) the types of harm in respect of which such a duty of care applies.
Duties owed to visitors in respect of behaviour to each other
One physical world example mentioned in the Perrin/Woods paper is the bar. The common law duty of care owed by a members' bar to its visitors was considered by the Court of Appeal in Everett v Comojo. This was a case of personal injury: a guest stabbed two other guests several times, leading to a claim that the owners of the club should have taken steps to prevent the perpetrator committing the assault. The court held that the club owed a duty of care analogous to statutory occupiers' liability, but that on the facts it had not been breached. The content of the duty of care was limited. The bar was under no obligation to search guests on entry for offensive weapons. There had been no prior indication that the guest was about to turn violent. While a waitress had become concerned, and went to talk to the manager, she could not have been criticised if she had done nothing.
The judge suggested that a club with a history of people bringing in offensive weapons might have a duty to search guests at the door. In a club with a history of outbreaks of violence the duty might be to have staff on hand to control the outbreak. Some clubs might have to have security personnel permanently present.   In a club with no history the duty might only be to train staff to look out for trouble and to alert security personnel.
This variable duty of care existed in respect of personal injury in the specific situation where the serving of alcohol created a particular risk of loss of control and violence by patrons.
We can also consider the sports ground. In Cunningham v Reading Football Club Ltd the football club was found to have breached its statutory duty of care to a policeman who was injured when visiting fans broke pieces of concrete off the “appallingly dilapidated” terraces and used them as missiles. The club was found to have been well aware that the visiting crowd was very likely indeed to contain a violent element. Similar incidents involving lumps of concrete broken off from the terracing had occurred at a match played at the same ground less than four months earlier and no steps had been taken in the meantime to make that more difficult.
In a Scottish case a golf club was held liable for injuries suffered by a golfer struck by a golf ball played by a fellow golfer, on the basis of lack of warning signs in an area at risk from a mishit ball.
The Perrin/Woods evidence cites the example of a theme park. The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.
It can be seen that what is required by a duty of care may vary with the factual circumstances. The Perrin/Woods evidence emphasises the flexibility of a duty of care according to the degree of risk, although it advocates putting that assessment in the hands of a regulator (that is another debate).
However, we should not lose sight of the fact that in the offline world the variable content of duties of care is contained within boundaries that determine whether a duty of care exists at all and in respect of what kinds of harm.

The law does not impose a universally applicable duty of care to take steps to prevent or reduce any kind of foreseeable harm that visitors may cause to each other; certainly not when the harm is said to have been inflicted by words rather than by a knife, a flying lump of concrete or an errant golf ball.
Types of harm
That brings us to the kind of harm that an online duty of care might seek to prevent.
A significant difference from offline physical spaces is that internet platforms are based on speech. That is why distribution of print information has served well as an analogy.
Where activities like grooming, harassment and intimidation are concerned, it is true that the fact that words may be the means by which they are carried out is of no greater significance online than it is offline. Saying may cross the line into doing. And an online conversation can lead to a real world encounter or take place in the context of a real world relationship outside the platform.
Nevertheless, offensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.
Words and images may cause distress. It may be said that they can cause psychiatric harm. But even in the two-way scenario of one person injuring another, there is argument over the proper boundaries of recoverable psychiatric damage by those affected, directly or indirectly. Only in the case of intentional infliction of severe distress can pure psychiatric damage be recovered.
The difficulties are compounded in the three-way scenario: a duty of care on a platform to prevent or reduce the risk of one visitor using words that cause psychiatric damage or emotional harm to another visitor. Such a duty involves predicting the potential psychological effect of words on unknown persons. The obligation would be of a quite different kind from the duty on the occupier of a football ground to take care to repair dilapidated terracing, with a known risk of personal injury by fans prising up lumps of concrete and using them as missiles.
It might be countered that the platform would have only to consider whether the risk of psychological or emotional harm exceeded a threshold. But the lower the threshold, the greater the likelihood of collateral damage by suppression of legitimate speech. A regime intended to internalise a negative externality then propagates a different negative externality created by the duty of care regime itself. This is an inevitable risk of extrapolating safety-related duties of care to speech-related harms.
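To make that trade-off concrete, here is a deliberately toy sketch in Python. The posts and harm scores are invented for the example and stand in for whatever scoring mechanism a platform might deploy; nothing here reflects any real system:

```python
# Toy illustration of the threshold trade-off described above.
# All posts and scores are invented for the example.
posts = [
    ("threatening message", 0.92),
    ("heated but lawful argument", 0.61),
    ("sharp-edged satire", 0.48),
    ("holiday photos", 0.05),
]
genuinely_harmful = {"threatening message"}

for threshold in (0.8, 0.4):
    removed = [text for text, score in posts if score >= threshold]
    wrongly_removed = [text for text in removed
                       if text not in genuinely_harmful]
    print(f"threshold {threshold}: removed {len(removed)}, "
          f"legitimate speech suppressed {len(wrongly_removed)}")
```

On these invented numbers, lowering the threshold from 0.8 to 0.4 removes nothing extra that is genuinely harmful, but suppresses two items of legitimate speech: the collateral damage grows as the threshold falls.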
Some of the difficulties in relation to psychiatric harm and freedom of speech are illustrated by the UK Supreme Court case of Rhodes v OPO. This claim was brought under the rule in Wilkinson v Downton, which by way of exception from the general rules of negligence permits recovery for deliberately inflicted severe distress resulting in psychiatric illness. The case was about whether the author of an autobiography should be prevented from publishing by an interlocutory injunction. The claim was that, if his child were to read it, the author would be intentionally causing distress to the child as a result of the blunt and graphic descriptions of the abuse that the author had himself suffered as a child.  The Supreme Court allowed the publication to proceed.
The Court of Appeal had held that there could be no justification for the publication if it was likely to cause psychiatric harm to the child. The Supreme Court disagreed, commenting that:
“that approach excluded consideration of the wider question of justification based on the legitimate interest of the defendant in telling his story to the world at large in the way in which he wishes to tell it, and the corresponding interest of the public in hearing his story. … ” [75]
It went on:
“It is difficult to envisage any circumstances in which speech which is not deceptive, threatening or possibly abusive, could give rise to liability in tort for wilful infringement of another’s right to personal safety. The right to report the truth is justification in itself. That is not to say that the right of disclosure is absolute … . But there is no general law prohibiting the publication of facts which will cause distress to another, even if that is the person’s intention.” [77]
This passage aptly illustrates the caution that has to be exercised in applying physical world concepts of harm, injury and safety to communication and speech, even before considering the further step of imposing a duty of care on a platform to take steps to reduce the risk of their occurrence as between third parties, or the yet further step of appointing a regulator to superintend the platform’s systems for doing so.
The Supreme Court went on to criticise the injunction granted by the Court of Appeal, which had permitted publication of the book only in a bowdlerised version. It emphasised the right of the author to communicate his experiences using brutal language:
“His writing contains dark descriptions of emotional hell, self-hatred and rage, as can be seen in the extracts which we have set out. The reader gains an insight into his pain but also his resilience and achievements. To lighten the darkness would reduce its effect. The court has taken editorial control over the manner in which the appellant’s story is expressed. A right to convey information to the public carries with it a right to choose the language in which it is expressed in order to convey the information most effectively.” [78]
Prior restraint
The Supreme Court in Rhodes emphasised not only the right of the author to tell the world about his experience, but the “corresponding public interest in others being able to listen to his life story in all its searing detail”.
It may be thought that there is no issue with requiring platforms to remove content, so long as the person who posted it has access to a put-back and appeal procedure.
That, however, addresses only one side of the freedom of speech coin.  It does nothing to address the corresponding interest of others in being able to read it, a right which they will never be able to exercise if a platform has been required to prevent an item seeing the light of day and the originator then does nothing to challenge the decision.
We derive from the right of freedom of speech a set of principles that collide with the kind of actions that duties of care might require, such as monitoring and pre-emptive removal of content. The precautionary principle may have a place in preventing harm such as pollution, but when applied to speech it translates directly into prior restraint. The presumption against prior restraint refers not just to pre-publication censorship, but also to the principle that speech should stay available to the public until the merits of a complaint have been adjudicated by a legally competent independent tribunal. The fact that we are dealing with the internet does not negate the value of procedural protections for speech.
Not every duty of care involves monitoring and removal of content. Not all use of words amounts to pure speech. Nevertheless, we are in dangerous territory when we seek to apply preventive non-specific duties of care to users' communications.
Duties of care and the Electronic Commerce Directive
Duties of care are relevant to the intermediary liability protections of the Electronic Commerce Directive. Article 15 prevents a general monitoring obligation being imposed on conduits, hosts or caches.  However Recital 48 says:
“This Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities.”
This does not itself impose a duty of care on intermediaries. It simply leaves room for Member States to impose various kinds of duty of care so long as they do not contravene Article 15 or run counter to the liability protections in Articles 12 to 14.
Article 15 again focuses attention on the question “A duty of care to do what?” A duty of care requiring that users have access to an emergency button would not breach Article 15. An obligation to screen user communications would do so.
Conclusion
This piece started by observing that no analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the proposed duty of care would consist in preventing harmful behaviour by one site visitor towards another.
The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.

Sunday, 7 October 2018

A Lord Chamberlain for the internet? Thanks, but no thanks.

This summer marked the fiftieth anniversary of the Theatres Act 1968, the legislation that freed the theatres from the censorious hand of the Lord Chamberlain of Her Majesty’s Household. Thereafter theatres needed to concern themselves only with the general laws governing speech. In addition they were granted a public good defence to obscenity and immunity from common law offences against public morality.

The Theatres Act is celebrated as a landmark of enlightenment. Yet today we are on the verge of creating a Lord Chamberlain of the Internet. We won't call it that, of course. The Times, in its leader of 5 July 2018, came up with the faintly Orwellian "Ofnet". Speculation has recently renewed that the UK government is laying plans to create a social media regulator to tackle online harm. What form that might take, should it happen, we do not know. We will find out when the government produces a promised white paper.

When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.

The statute book is full of legislation that restrains speech. Most, if not all, of this legislation applies online as well as offline. Some of it applies more strictly online than offline. These laws set boundaries: defamation, obscenity, intellectual property rights, terrorist content, revenge porn, harassment, incitement to racial and religious hatred and many others. Those boundaries represent a balance between freedom of speech and harm to others. It is for each of us to stay inside the boundaries, wherever they may be set. Within those boundaries we are free to say what we like, whatever someone in authority may think. Independent courts, applying principles, processes and presumptions designed to protect freedom of speech, adjudge alleged infractions according to clear, certain laws enacted by Parliament.

But much of the current discussion centres on something quite different: regulation by regulator. This model concentrates discretionary power in a state agency. In the UK the model is to a large extent the legacy of the 1980s Thatcher government, which started the OF trend by creating OFTEL (as it then was) to regulate the newly liberalised telecommunications market. A powerful regulator, operating flexibly within broadly stated policy goals, can be rule-maker, judge and enforcer all rolled into one.

That may be a long-established model for economic regulation of telecommunications competition, energy markets and the like. But when regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve.
As OFCOM puts it in its recent discussion paper ‘Addressing Harmful Online Content’: “What has worked in a broadcasting context is having a set of objectives laid down by Parliament in statute, underpinned by detailed regulatory guidance designed to evolve over time. Changes to the regulatory requirements are informed by public consultation.”

Where exactly the limits on freedom of speech should lie is a matter of intense, perpetual, debate. It is for Parliament to decide, after due consideration, whether to move the boundaries. It is anathema to both freedom of speech and the rule of law for Parliament to delegate to a regulator the power to set limits on individual speech.

It becomes worse when a document like the government’s Internet Safety Strategy Green Paper takes aim at subjective notions of social harm and unacceptability rather than strict legality and illegality according to the law. ‘Safety’ readily becomes an all-purpose banner under which to proceed against nebulous categories of speech which the government dislikes but cannot adequately define.

Also troubling is the frequently erected straw man that the internet is unregulated. This blurs the vital distinction between the general law and regulation by regulator. Participants are prone to discuss regulation as if the general law did not exist.

Occasionally the difference is acknowledged, but not necessarily as a virtue. The OFCOM discussion paper observes that by contrast with broadcast services subject to long established regulation, some newer online services are ‘subject to little or no regulation beyond the general law’, as if the general law were a mere jumping-off point for further regulation rather than the democratically established standard for individual speech.

OFCOM goes on to say that this state of affairs was “not by design, but the outcome of an evolving system”. However, a deliberate decision was taken with the Communications Act 2003 to exclude OFCOM’s jurisdiction over internet content in favour of the general law alone.

Moving away from individual speech, the OFCOM paper characterises the fact that online newspapers are not subject to the impartiality requirements that apply to broadcasters as an inconsistency. Different, yes. Inconsistent, no.

Periodically since the 1990s the idea has surfaced that as a result of communications convergence broadcast regulation should, for consistency, apply to the internet. With the advent of video over broadband aspects of the internet started to bear a superficial resemblance to television. The pictures were moving, send for the TV regulator.

EU legislators have been especially prone to this non-sequitur. They are currently enacting a revision of the Audiovisual Media Services Directive that will require a regulator to exercise some supervisory powers over video sharing platforms.

However broadcast regulation, not the rule of general law, is the exception to the norm. It is one thing for a body like OFCOM to act as broadcast regulator, reflecting television’s historic roots in spectrum scarcity and Reithian paternalism. Even that regime is looking more and more anachronistic as TV becomes less and less TV-like. It is quite another to set up a regulator with power to affect individual speech. And it is no improvement if the task of the regulator is framed as setting rules about the platforms’ rules. The result is the same: discretionary control exercised by a state entity (however independent of the government it may be) over users’ speech, via rules that Parliament has not specifically legislated.

It is true, as the OFCOM discussion paper notes, that the line between broadcast and non-broadcast regulation means that the same content can be subject to different rules depending on how it is accessed. If that is thought to be anomalous, it is a small price to pay for keeping regulation by regulator out of areas in which it should not tread.

The House of Commons Digital, Culture, Media and Sport Committee, in its July 2018 interim report on fake news, recommended that the government should use OFCOM’s broadcast regulation powers, “including rules relating to accuracy and impartiality”, as “a basis for setting standards for online content”. It is perhaps testament to the loss of perspective that the internet routinely engenders that a Parliamentary Committee could, in all seriousness, suggest that accuracy and impartiality rules should be applied to the posts and tweets of individual social media users.

Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’être. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech. Offensive or controversial words are not akin to oil washed up on the seashore or chemicals discharged into a river. Objectively ascertainable physical damage caused by an oil spill bears no relation to a human being evaluating and reacting to the merits and demerits of what people say and write.

If we go further and transpose the environmental precautionary principle to speech we then have prior restraint – the opposite of the presumption against prior restraint that has long been regarded as a bulwark of freedom of expression. All the more surprising then that The Times, in its July Ofnet editorial, should complain of the internet that “by the time police and prosecutors are involved the damage has already been done”. That is an invitation to step in and exercise prior restraint.

As an aside, do the press really think that Ofnet would not before long be knocking on their doors to discuss their online editions? That is what happened when ATVOD tried to apply the Audiovisual Media Services Directive to online newspapers that incorporated video. Ironically it was The Times' sister paper, the Sun, that successfully challenged that attempt.

The OFCOM discussion paper observes that there are “reasons to be cautious over whether [the broadcast regime] could be exported wholesale to the internet”. Those reasons include that “expectations of protection or [sic] freedom of expression relating to conversations between individuals may be very different from those relating to content published by organisations”.

US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back in its bottle.

Regulation by regulator, applied to speech, harks back to the bad old days of the Lord Chamberlain and theatres. In a free and open society we do not appoint a Lord Chamberlain of the Internet – even one appointed by Parliament rather than by the Queen – to tell us what we can and cannot say online, whether directly or via the proxy of online intermediaries. The boundaries are rightly set by general laws.

We can of course debate what those laws should be. We can argue about whether intermediary liability laws are appropriately set. We can consider what tortious duties of care apply to online intermediaries and whether those are correctly scoped. We can debate the dividing line between words and conduct. We can discuss the vexed question of an internet that is both reasonably safe for children and fit for grown-ups. We can think about better ways of enforcing laws and providing victims of unlawful behaviour with remedies. These are matters for public debate and for Parliament and the general law within the framework of fundamental rights. None of this requires regulation by regulator. Quite the opposite.

Nor is it appropriate to frame these matters of debate as (in the words of The Times) “an opportunity to impose the rule of law on a legal wilderness where civic instincts have been suspended in favour of unthinking libertarianism for too long”. People who use the internet, like people everywhere, are subject to the rule of law. The many UK internet users who have ended up before the courts, both civil and criminal, are testament to that. Disagreement with the substantive content of the law does not mean that there is a legal vacuum.

What we should do is take a hard look at what laws do and don’t apply online (the Law Commission is already looking at social media offences), revise those laws if need be, and then look at how they can most appropriately be enforced.

This would involve looking at areas that it is tempting for a government to avoid, such as access to justice. How can we give people quick and easy access to independent tribunals with legitimacy to make decisions about online illegality? The current court system cannot provide that service at scale, and it is quintessentially a job for government rather than private actors. More controversially, is there room for greater use of powers such as ‘internet ASBOs’ to target the worst perpetrators of online illegality? The existing law contains these powers, but they seem to be little used.

It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.



Thursday, 13 September 2018

Big Brother Watch v UK – implications for the Investigatory Powers Act?

Today I have been transported back in time, to that surreal period following the Snowden revelations in 2013 when anyone who knew anything about the previously obscure RIPA (Regulation of Investigatory Powers Act 2000) was in demand to explain how it was that GCHQ was empowered to conduct bulk interception on a previously unimagined scale.

The answer (explained here) lay in the ‘certificated warrants’ regime under S.8(4) RIPA for intercepting external communications. ‘External’ communications were those sent or received outside the British Islands, thus including communications with one end in the British Islands.

Initially we knew about GCHQ’s TEMPORA programme and, as the months stretched into years, we learned from the Intelligence and Security Committee of the importance to GCHQ of bulk intercepted metadata (related communications data, in RIPA jargon):

“We were surprised to discover that the primary value to GCHQ of bulk interception was not in the actual content of communications, but in the information associated with those communications.” [80] (Report, March 2015)
According to a September 2015 Snowden disclosure, bulk intercepted communications data was processed and extracted into query-focused datasets such as KARMA POLICE, containing billions of rows of data. David (now Lord) Anderson QC’s August 2016 Bulk Powers Review gave an indication of some techniques that might be used to analyse metadata, including unseeded pattern analysis.

Once the Investigatory Powers Bill started its journey into legislation the RIPA terminology started to fade. But today it came back to life, with the European Court of Human Rights judgment in Big Brother Watch and others v UK.

The fact that the judgment concerns a largely superseded piece of legislation does not necessarily mean it is of historic interest only. The Court held that both the RIPA bulk interception regime and its provisions for acquiring communications data from telecommunications operators violated Articles 8 (privacy) and 10 (freedom of expression) of the European Convention on Human Rights. The interesting question for the future is whether the specific aspects that resulted in the violation have implications for the current Investigatory Powers Act 2016.

The Court expressly did not hold that bulk interception per se was impermissible. But it said that a bulk interception regime, where an agency has broad discretion to intercept communications, does have to be surrounded with more rigorous safeguards around selection and examination of intercepted material. [338]

It is difficult to be categoric about when the absence of a particular feature or safeguard will or will not result in a violation, since the Court endorsed its approach in Zakharov whereby in assessing whether a regime is ‘in accordance with the law’ the Court can have regard to certain factors which are not minimum requirements, such as arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law. [320]

That said, the Court identified three failings in RIPA that were causative of the violations. These concerned selection and examination of intercepted material, related communications data, and journalistic privilege.

Selection and examination of intercepted material

The Court held that lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst, meant that the RIPA S. 8(4) bulk interception regime did not meet the “quality of law” requirement under Article 8 and was incapable of keeping the “interference” with Article 8 to what is “necessary in a democratic society”.

As to whether the IPAct suffers from the same failing, a careful study of the Act may lead to the conclusion that when considering whether to approve a bulk interception warrant the independent Judicial Commissioner should indeed look at the entire selection process. Indeed I argued exactly that in a submission to the Investigatory Powers Commissioner. Whether it is clear that that is the case and, even if it is, whether the legislation and supporting public documents are sufficiently clear as to the level of granularity at which such oversight should be conducted, is another matter.

As regards selectors (the Court’s greatest concern), the Court observed that while it is not necessary that selectors be listed in the warrant, mere after-the-event audit and the possibility of an application to the IPT were not sufficient. The search criteria and selectors used to filter intercepted communications should be subject to independent oversight. [340]

Related communications data

The RIPA safeguards for examining bulk interception product (notably the certificate to select a communication for examination by reference to someone known to be within the British Islands) did not apply to ‘related communications data’ (RCD). RCD is communications data (in practice traffic data) acquired by means of the interception.

The significance of the difference in treatment is increased when it is appreciated that the exemption extends to RCD obtained from incidentally acquired internal communications, and that there is no requirement under RIPA to discard such material. As the Court noted: “The related communications data of all intercepted communications – even internal communications incidentally intercepted as a “by-catch” of a section 8(4) warrant – can therefore be searched and selected for examination without restriction.” [348]

The RCD regime under RIPA can be illustrated graphically:

[Diagram: RIPA s.8(4) treatment of content and related communications data – not reproduced]
In this regard the IPAct is virtually identical. We now have tweaked definitions of ‘overseas-related communications’ and ‘secondary data’ instead of external communications and RCD, but the structure is the same:

[Diagram: equivalent IPAct structure for overseas-related communications and secondary data – not reproduced]
The only substantive additional safeguard is that examination of secondary data has to be for stated operational purposes (which can be broad).
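That asymmetry can be expressed as a toy decision rule. The sketch below is my own schematic simplification of the position as described above, not statutory language:

```python
# Schematic sketch (an assumption-laden simplification, not the Act's
# wording): selecting intercepted CONTENT for examination by reference
# to a person known to be in the British Islands engages an extra
# safeguard; secondary data needs only a stated operational purpose.
def selection_permitted(kind, refers_to_person_in_british_islands,
                        extra_safeguard_met, operational_purpose_stated):
    if kind == "content" and refers_to_person_in_british_islands:
        return extra_safeguard_met
    # Secondary data (including 'by-catch' from internal
    # communications) is never routed through that safeguard.
    return operational_purpose_stated

print(selection_permitted("content", True, False, True))         # False
print(selection_permitted("secondary_data", True, False, True))  # True
```

The second call illustrates the by-catch point made at [348]: the secondary data of a communication can be selected for examination even where the equivalent selection of its content would be barred.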

The Court accepted that under RIPA, as the government argued (and had argued in the original IPT proceedings):
“the effectiveness of the [British Islands] safeguard [for examination of content] depends on the intelligence services having a means of determining whether a person is in the British Islands, and access to related communications data would provide them with that means.” [354]
 But it went on:

“Nevertheless, it is a matter of some concern that the intelligence services can search and examine “related communications data” apparently without restriction. While such data is not to be confused with the much broader category of “communications data”, it still represents a significant quantity of data. The Government confirmed at the hearing that “related communications data” obtained under the section 8(4) regime will only ever be traffic data.  
However, … traffic data includes information identifying the location of equipment when a communication is, has been or may be made or received (such as the location of a mobile phone); information identifying the sender or recipient (including copy recipients) of a communication from data comprised in or attached to the communication; routing information identifying equipment through which a communication is or has been transmitted (for example, dynamic IP address allocation, file transfer logs and e-mail headers (other than the subject line of an e-mail, which is classified as content)); web browsing information to the extent that only a host machine, server, domain name or IP address is disclosed (in other words, website addresses and Uniform Resource Locators (“URLs”) up to the first slash are communications data, but after the first slash content); records of correspondence checks comprising details of traffic data from postal items in transmission to a specific address, and online tracking of communications (including postal items and parcels). [355] 

In addition, the Court is not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content. For example, the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with. [356]

Consequently, while the Court does not doubt that related communications data is an essential tool for the intelligence services in the fight against terrorism and serious crime, it does not consider that the authorities have struck a fair balance between the competing public and private interests by exempting it in its entirety from the safeguards applicable to the searching and examining of content. While the Court does not suggest that related communications data should only be accessible for the purposes of determining whether or not an individual is in the British Islands, since to do so would be to require the application of stricter standards to related communications data than apply to content, there should nevertheless be sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.” [357]

 This is a potentially significant holding. In IPAct terms this would appear to require that selection for examination of secondary data for any purpose other than determining whether an individual is, for the time being, in the British Islands should be subject to different and more stringent limitations and procedures.

It is also noteworthy that, unlike RIPA, the IP Act contains provisions enabling some categories of content to be extracted from intercepted communications and treated as secondary data.
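As an aside, the “first slash” rule for web browsing information quoted at [355] lends itself to a concrete illustration. Here is a minimal sketch, on the assumption that everything up to the first slash after the host is treated as communications data and everything after it as content (an illustration of the boundary as described, not the statutory definition):

```python
from urllib.parse import urlsplit

def split_browsing_record(url):
    """Divide a URL at the 'first slash' boundary described in the
    judgment: scheme and host as communications data, the remainder
    as content. Illustrative only."""
    parts = urlsplit(url)
    communications_data = f"{parts.scheme}://{parts.netloc}/"
    content = url[len(communications_data):]
    return communications_data, content

print(split_browsing_record("https://example.com/pages/article?id=42"))
# ('https://example.com/', 'pages/article?id=42')
```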

Journalistic privilege

 The Court found violations of Article 10 under both the bulk interception regime and the regime for acquisition of communications data from telecommunications service providers.

For bulk interception, the court focused on lack of protections at the selection and examination stage: “In the Article 10 context, it is of particular concern that there are no requirements – at least, no “above the waterline” requirements – either circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector), or requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved. Consequently, it would appear that analysts could search and examine without restriction both the content and the related communications data of these intercepted communications.” [493]

For communications data acquisition, the court observed that the protections for journalistic privilege only applied where the purpose of the application was to determine a source; they did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely. [499]

This may have implications for those IPAct journalistic safeguards that are limited to applications made ‘for the purpose of’ intercepting or examining journalistic material or sources.




Tuesday, 5 June 2018

Regulating the internet – intermediaries to perpetrators

Nearly twenty-five years after the advent of the Web, and longer since the birth of the internet, we still hear demands that the internet should be regulated – for all the world as if people who use the internet were not already subject to the law. The May 2017 Conservative manifesto erected a towering straw man: “Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.”  The straw man even found its way into the title of the current House of Lords Communications Committee inquiry: "The Internet: to regulate or not to regulate?".

The choice is not between regulating or not regulating.  If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or lesser liability, or that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet.

A recent article by David Anderson Q.C. asked the question 'Who governs the Internet?' and spoke of 'subjecting the tech colossi to the rule of law'. The only acceptable answer to the ‘who governs?’ question is certainly 'the law'. We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator. But as to the rule of law, we should not confuse the existence of laws with disagreement about what, substantively, those laws should consist of. Bookshops and magazine distributors operate, for defamation, under a liability system with some similarities to the hosting regime under the Electronic Commerce Directive. No-one has suggested, nor one hopes would suggest, that as a consequence they are not subject to the rule of law.

It is one thing to identify how not to regulate, but it would be foolish to deny that there are real concerns about some of the behaviour that is to be found online. The government is currently working towards a White Paper setting out proposals for legislation to tackle “a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation”. What is to be done about harassment, bullying and other abusive behaviour that is such a significant contributor to the current furore?

Putting aside the debate about intermediary liability and obligations, we could ask whether we are making good enough use of the existing statute book to target perpetrators. The criminal law exists, but can be seen as a blunt instrument. It was for good reason that the Director of Public Prosecutions issued lengthy prosecutorial guidelines for social media offences.

Occasionally the idea of an ‘Internet ASBO’ has been floated. Three years ago a report of the All-Party Parliamentary Inquiry into Antisemitism recommended, adopting an analogy with sexual offences prevention orders, that the Crown Prosecution Service should undertake a “review to examine the applicability of prevention orders to hate crime offences and if appropriate, take steps to implement them.” 

A possible alternative, however, may lie elsewhere on the statute book. The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes the kind of online behaviour complained of.

Nothing in the legislation restricts an ASBI to offline activities. Indeed over 10 years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17-year-old who had been posting material on the social media platform Bebo, banning him from publishing material that was threatening or abusive and promoted criminal activity.

ASBIs raise difficult questions of how they should be framed and of proportionality, and there may be legitimate concerns about the broad terms in which anti-social behaviour is defined. Nevertheless the courts to which applications are made have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017) makes no mention of their use in relation to online behaviour.  That could perhaps usefully be revisited. Another possibility might be to explore extending the ability to apply for an ASBI beyond the authorities, for instance to some voluntary organisations. 

Whilst the debate about how to regulate internet activities and the role of intermediaries is not about to go away, we should not let that detract from the importance of focusing on remedies against the perpetrators themselves.

Monday, 30 April 2018

The Electronic Commerce Directive – a phantom demon?

Right now the ECommerce Directive – or at any rate the parts that shield hosting intermediaries from liability for users’ content – is under siege. The guns are blazing from all directions: the Prime Minister’s speech in Davos, Culture Secretary Matt Hancock’s speech at the Oxford Media Convention on 12 March 2018 and the European Commission’s Recommendation on Tackling Illegal Content Online all take aim at the shield, or at its linked bar on imposing general monitoring obligations on conduits, caches and hosts. The proposed EU Copyright Directive is attacking from the flanks.

The ECommerce Directive is, of course, part of EU law. As such the UK could, depending on what form Brexit takes, diverge from it post-Brexit. The UK government has identified the Directive as a possible divergence area and Matt Hancock's Department for Digital, Culture, Media and Sport (DCMS) is looking at hosting liability.
The status quo

Against this background it is worth looking behind the polarised rhetoric that characterises this topic and, before we decide whether to take a wrecking ball to the Directive's liability provisions, taking a moment to understand how they work. As so often with internet law, the devil revealed by the detail is a somewhat different beast from that portrayed in the sermons.
We can already sense something of that disparity. In her Davos speech Theresa May said:
“As governments, it is also right that we look at the legal liability that social media companies have for the content shared on their sites. The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts.”
If this was intended to question existing platform liability protections, it was a curious remark. Following the CJEU decisions in LVMH v Google France and L’Oreal v eBay, if a hosting platform treats user content non-neutrally it will not have liability protection for that content. By non-neutrally the CJEU means that the operator "plays an active role of such a kind as to give it knowledge of, or control over, those data".

So the status quo is that if a platform does not act neutrally as a passive host it is potentially exposed to legal liability.
By questioning the status quo did the Prime Minister mean to advocate greater protection for platforms that act non-neutrally than currently exists? In the febrile atmosphere that currently surrounds social media platforms that seems unlikely, but it could be the literal reading of her remarks. If not, is it possible that the government is taking aim at a phantom?
Matt Hancock's speech on 12 March added some detail:

"We are looking at the legal liability that social media companies have for the content shared on their sites. Because it’s a fact on the web that online platforms are no longer just passive hosts.
But this is not simply about applying publisher or broadcaster standards of liability to online platforms.
There are those who argue that every word on every platform should be the full legal responsibility of the platform. But then how could anyone ever let me post anything, even though I’m an extremely responsible adult?
This is new ground and we are exploring a range of ideas… including where we can tighten current rules to tackle illegal content online… and where platforms should still qualify for ‘host’ category protections."
It is debatable whether this is really new ground when these issues have been explored since the advent of bulletin boards and then the internet. Nevertheless there can be no doubt that the rise of social media platforms has sparked off a new round of debate.
Sectors, platforms and activities

The activities of platforms are often approached as if they constituted a homogeneous whole: the platform overall is either a passive host or it is not. Baroness Kidron, opening the House of Lords social media debate on 11 January 2018, went further, drawing an industry sector contrast between media companies and tech businesses:
“Amazon has set up a movie studio. Facebook has earmarked $1 billion to commission original content this year. YouTube has fully equipped studios in eight countries."
She went on:  

"The Twitter Moments strand exists to “organize and present compelling content”. Apple reviews every app submitted to its store, “based on a set of technical, content, and design criteria”. By any other frame of reference, this commissioning, editing and curating is for broadcasting or publishing.”
However the ECommerce Directive does not operate at a business sector level, nor at the level of a platform treated as a whole. It operates at the level of specific activities and items of content. If an online host starts to produce its own content like a media company, then it will not have the protection of the Directive for that activity. Nor will it have protection for user content that it selects and promotes so as to have control over it.  Conversely if a media or creative company starts to host user-generated content and treats it neutrally, it will have hosting protection for that activity.  

In this way the Directive adapts to changes in behaviour and operates across business models. It is technology-neutral and business sector-agnostic. A creative company that develops an online game or virtual world will have hosting protection for what users communicate to each other in-world and for what they make using the tools provided to them.
The line that the Directive draws is not between media and tech businesses, nor between simple and complex platforms; it is drawn at the fine-grained level of individual items of content. The question is always whether the host has intervened in relation to a particular item of content to the extent that (in the words of one academic)[1] it might be understood to be the host's own. If the host has done that, the platform will not have hosting protection for that item of content. It will still have protection for other items of user-generated content in relation to which it has remained neutral. The scheme of the Directive is illustrated in this flowchart.


The analysis can be illustrated by an app such as one that an MP might provide for the use of constituents. Videos made by the MP would be his or her own content, not protected by the hosting provisions. If the app allows constituents to post comments to a forum, those would attract hosting protection. If the MP selected and promoted a comment as Constituent Comment of the Day, he or she would have intervened sufficiently to lose hosting protection for that comment.
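
Purely by way of illustration, the flowchart's per-item logic can be captured in a few lines of code. This is a toy sketch, not legal analysis: the field names and the three-way outcome are my own simplifications, and it deliberately ignores the further conditions about acquiring knowledge of illegality and acting expeditiously to remove content.

```python
# A toy model of the ECommerce Directive's per-item hosting analysis
# as described above. Illustrative only: real cases turn on nuanced
# facts, and hosting protection is also conditional on expeditious
# removal once the host has actual knowledge of illegality.

from dataclasses import dataclass


@dataclass
class ContentItem:
    description: str
    own_content: bool   # produced by the provider itself (media-style activity)
    active_role: bool   # selected/promoted so as to give the provider
                        # knowledge of, or control over, this item


def hosting_protection(item: ContentItem) -> str:
    """Apply the decision flow item by item: protection attaches (or not)
    to each item of content, never to the platform as a whole."""
    if item.own_content:
        return "no hosting protection: the provider's own content"
    if item.active_role:
        return "no hosting protection: non-neutral treatment of this item"
    return "hosting protection available (subject to notice-based conditions)"


# The MP's app example from the text: three items, three answers.
items = [
    ContentItem("video made by the MP", own_content=True, active_role=False),
    ContentItem("constituent's forum comment", own_content=False, active_role=False),
    ContentItem("comment promoted as 'Constituent Comment of the Day'",
                own_content=False, active_role=True),
]
for item in items:
    print(f"{item.description}: {hosting_protection(item)}")
```

The only point of the sketch is structural: the same platform can receive different answers for different items of content, which is precisely the feature that sector-level or platform-level characterisations miss.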

This activity-based drawing of the line is not an accident. It was the declared intention of the promoters of the Directive. The European Commission said in its Proposal for the Directive back in 1998:
"The distinction as regards liability is not based on different categories of operators but on the specific types of activities undertaken by operators. The fact that a provider qualifies for an exemption from liability as regards a particular act does not provide him with an exemption for all his other activities." 
Courts in Ireland (Mulvaney v Betfair), the UK (Kaschke v Gray, England and Wales Cricket Board v Tixdaq) and France (TF1 v Dailymotion) have reached similar conclusions (albeit in Tixdaq only a provisional conclusion). Most authoritatively, the CJEU in L'Oreal v eBay states that a host that has acted non-neutrally in relation to certain data cannot rely on the hosting protection in the case of those data (judgment, para [116] – and see flowchart above).

The report of the Committee on Standards in Public Life on "Intimidation in Public Life" also discussed hosting liability.  It said:
“Parliament should reconsider the balance of liability for social media content. This does not mean that the social media companies should be considered fully to be the publishers of the content on their sites. Nor should they be merely platforms, as social media companies use algorithms that analyse and select content on a number of unknown and commercially confidential factors.”
Analysing and selecting user content so as to give the operator control over the selected content would exclude that content from hosting protection under the ECommerce Directive. The Committee's suggestion that such activities should have a degree of protection short of full primary publisher liability would seem to involve increasing, not decreasing, existing liability protection. That is the opposite of what, earlier in the Report, the Committee seemed to envisage would be required: “The government should seek to legislate to shift the balance of liability for illegal content to the social media companies away from them being passive ‘platforms’ for illegal content.”

Simple and complex platforms
The question of whether a hosting platform has behaved non-neutrally in relation to any particular content is also unrelated to the simplicity or complexity of the platform. The Directive has been applied to vanilla web hosting and structured, indexed platforms alike.  That is consistent with the contextual background to the Directive, which included court decisions on bulletin boards (in some ways the forerunners of today’s social media sites) and the Swedish Bulletin Boards Act 1998.

The fact that the ECD encompasses simple and complex platforms alike leads to a final point: the perhaps underappreciated variety of activities that benefit from hosting protection. They include, as we have seen, online games and virtual worlds. They would include collaborative software development environments such as GitHub. Cloud-based word processor applications, apps of any kind with a user-generated content element and website discussion forums would all be within scope. By focusing on activities defined in a technology-neutral way the Directive has transcended and adapted to many different evolving industries and kinds of business.
The voluntary sector

Nor should we forget the voluntary world. Community discussion forums are (subject to one possible reservation) protected by the hosting shield.  The reservation is that the ECD covers services of a kind ‘normally provided for remuneration’. The reason for this is that the ECD was an EU internal market Directive, based on the Services title of the TFEU. As such it had to be restricted to services with an economic element. 
In line with EU law on the topic the courts have interpreted this requirement generously. Nevertheless there remains a nagging doubt about the applicability of the protection to purely voluntary activities. The government could do worse than consider removing the "normally provided for remuneration" requirement so that the Mumsnets, the sports fan forums and the community forums of every kind can clearly be brought within the hosting protection.

[Amended 28 July 2018 with comment added after Matt Hancock quotation and addition of hosting liability flowchart.]




[1] C. Angelopoulos, 'On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market' (January 2017).