Monday, 27 July 2020

The penultimate word on copyright intermediary liability?

The 16 July 2020 Opinion of Advocate General Saugmandsgaard Øe in YouTube/Cyando is something of a tour de force, attempting in 256 closely reasoned paragraphs to construct a Grand Unified Theory of how the intermediary liability provisions of the ECommerce Directive 2000 and the communication to the public provisions of the Copyright InfoSoc Directive 2001 can be made to sit comfortably together when applied to platforms to which users can upload and share content: video streams in the case of YouTube, and a private file storage facility with the ability to share download links in the case of Cyando.
The AG’s Opinion will not be the last word on these topics – the CJEU’s judgment in YouTube/Cyando itself will follow, as well as several other pending CJEU references. The judgments may or may not adopt the AG’s approach. However, the Opinion will be difficult to surpass for its thorough analysis of the issues and for its heroic attempt to bring coherence to a fiercely contested and mutable area of the law.
However, for platforms that fall within Article 17 of the new Digital Copyright Directive the Copyright InfoSoc Directive provisions on which the AG has opined will in due course be superseded (albeit not apparently in the UK, which has said that it has no plans to implement the Digital Copyright Directive). Article 17 enacts a sui generis version of the communication to the public right and a corresponding customised liability safe harbour that will, for platforms within Article 17, replace the general Article 14 ECD shield. Notably, the AG rejected an argument that the new Directive merely clarifies what has always been the law under the existing Copyright InfoSoc Directive and the ECommerce Directive.
Article 17, we should also remember, is under challenge in the CJEU by Poland, claiming that the illegal upload prevention provision breaches Article 11 of the Charter. From that perspective, the views of the AG regarding balance of rights under the Charter will be of interest.
Why alignment?
Aligning the two Directives is an exercise peculiar to copyright. Generally speaking, the scope of the ECommerce Directive does not have to be reconciled with that of underlying substantive laws. ECD Articles 12 to 15 are an independent horizontal overlay over an enormous range of substantive civil and criminal liability across all Member States. They provide a uniform liability shield regardless of the scope of the underlying substantive law:
“Article 14(1) of Directive 2000/31 applies, horizontally, to all forms of liability which the providers in question may incur in respect of any kind of information which they store at the request of the users of their services, whatever the source of that liability, the field of law concerned and the characterisation or exact nature of the liability.” [138]

So, if a hosting provider loses the protection of Article 14 through not removing an item of content expeditiously after becoming aware of its illegality, liability does not necessarily ensue. That has to be assessed under the substantive underlying Member State law. As the AG Opinion puts it:
“the purpose of [Article 14] is not to determine positively the liability of a provider. It simply limits negatively the situations in which it can be held liable on that basis.” [134]

Copyright, however, is somewhat different. The ECD and the 2001 Copyright InfoSoc Directive went through the EU legislative process at about the same time and were intended, together, to establish (ECD, Recital 50):
“a clear framework of rules relevant to the issue of liability of intermediaries for copyright and relating [sic] rights infringements at Community level.”

The Copyright InfoSoc Directive noted that liability for activities in the network environment concerned copyright and related rights as well as other areas. The ECD:
“… provides a harmonised framework of principles and provisions relevant inter alia to important parts of this Directive.” (Copyright InfoSoc Directive, Recital 16)

This is the basis on which it is suggested that the scope of the two Directives should be aligned. If so, however, which should be the template? Since the ECD is horizontal and thus of general application, it would be logical that any such interpretative exercise should focus on bringing the Copyright Directive into line with the ECD, not vice versa. Recital 16 of the Copyright Directive explicitly cedes precedence to the ECD: “This Directive is without prejudice to provisions relating to liability in that Directive.”
The AG’s view is that the criteria for communication to the public and the conditions for the application of Art 14 must and can be interpreted consistently in order to avoid, in practice, any overlap between them.
Fundamental rights compatibility
The AG addresses compatibility with the EU Charter of Fundamental Rights, opining that the ‘high level of protection’ demanded by the Copyright InfoSoc Directive does not necessarily equate to ‘maximum protection’. Although copyright is protected as a fundamental right in the Charter, that right is not absolute and must generally be balanced with other fundamental rights and interests. The Court, he says, seeks a reasonable interpretation to achieve that.
He emphasises that a general monitoring obligation on hosts to seek illegal information and activity was held by the CJEU in SABAM/Netlog to be contrary not only to Article 15 ECD, but also the Charter. It would introduce a serious risk of undermining the fundamental rights involved: platform operators’ freedom to conduct a business (Article 16), users’ freedom of expression (Article 11) and also, in his view, freedom of the arts (Article 13).
For freedom of the arts, an obligated filtering tool might not distinguish adequately between legal and illegal content, leading to the blocking of legal content. That would endanger online creativity, in that maximum protection of some forms of intellectual creativity would be to the detriment of other forms of creativity which are also positive for society.
Approach to alignment
Since the general approach of the AG is to mould communication to the public, in its application to platforms, to fit Article 14 ECD, it is logical to focus primarily on his exposition of Article 14 ECD. The Opinion is also a useful reminder of some aspects of Article 14 that are easily overlooked or misunderstood.
Generally, the AG takes the active/passive distinction that determines qualification for hosting protection under the ECommerce Directive and maps it onto the contrast drawn in the Copyright InfoSoc Directive between (a) someone who, by providing ‘physical facilities’, acts as an intermediary (Recital 27) and (b) someone who “intervenes actively in the communication to the public of works” ([75]).
He emphasises that the distinction has to be drawn in relation to specific content. (This focus on whether conduct is active or passive in relation to a given item of content contrasts with a common misconception that the active/passive distinction under Article 14 ECD involves an assessment of the overall role of the platform.)
He expresses reservations about the line of CJEU caselaw in GS Media, Filmspeler and Pirate Bay, which he characterises as extending the communication to the public (‘CTP’) right into unharmonised areas of secondary liability, founded on facilitation and knowledge of illegality. Nevertheless, he goes on to discuss the application of that reasoning to the instant cases.
For this purpose he applies to communication to the public the knowledge and awareness principles of ECD Article 14 ([111]). As with the active/passive distinction, this focuses on specific activities and information, as opposed to (in this context) consequences flowing from generalised awareness of illegality.
In summary, therefore, the AG equates “primary” communication to the public – intervention in an actual or possible transmission – with the active/passive distinction that determines initial qualification for ECD Art 14 protection; and equates “secondary” communication to the public with the ECD's knowledge-based conditions for losing the liability protection of Article 14.
Article 14 transposed to “primary” communication to the public
The following are some key points in the AG’s discussion of Article 14. References are to paragraph numbers, together with (where relevant) an italicised citation to the closest equivalent point in the AG’s discussion of the communication to the public right.
  • Only a shield The purpose of Article 14 is to act as a liability shield, not to provide a positive determination of liability. [134]
  • Horizontal application The Article 14 exemption applies horizontally, independently of the characterisation of the liability by the underlying substantive law.  It therefore covers both primary and secondary liability for information provided and activities initiated by users. [138]
  • Additional activities Activities additional to storage provided as part of the service do not prevent applicability of Art 14. [145] Nevertheless, the exemption concerns only liability that may result from information provided by users. It does not cover any other aspect of the provider’s activity. [146]
  • Active/passive hosting As regards the active/passive hosting distinction established by the CJEU:
    • Inherent control The capacity for control inherent in any hosting activity cannot amount to an active role. [151] (cf CTP [73]: being an important, or even crucial, link in the chain does not amount to an essential role.)
    • Specific content An active role must relate to specific content, where by the nature of its activity the intermediary is deemed to acquire intellectual control of that content. [152] (cf CTP [75]: intentional decision to communicate a given work.)
    • A distinction should be made between controlling the conditions for display of user information and controlling the content of that information.  [160], [162] (cf CTP [75]: determines the content in some other way.)
  • Examples of an active role include:
    • Selecting stored information. [152] (CTP: [75])
    • Active involvement in the content of stored information in some other way. [152] (CTP: [75] (“determines it in some other way”)).
    • Presenting stored information to the public in such a way that it appears to be the host’s own. [152] (CTP: [75]) As to that, in the Advocate General’s view YouTube does not do so because it indicates which user uploaded the video. [156] (CTP: [83]); and Cyando does not do so because the average, reasonably informed, internet user knows that files stored by a file hosting/sharing platform do not, as a rule, come from the operator. [156]
  • Examples that are not an active role include:
    • Automatic uploading without prior viewing or selection [154] (CTP: [78])
    • Providing access to content or the ability to download, by purely technical and automatic processes. [155]
    • Structuring the way in which videos are presented and integrating into a standard viewing interface [157] - [159] (CTP: [81], [82])
    • Processing of search results and indexing under different categories [157] (CTP: [81], [82])
    • Integrated search function [157], [160] (CTP: [81])
    • Automated recommendation of videos similar to those previously viewed [161], [162] (CTP: [84])
    • Remuneration by advertising [163] (CTP: [86])
    • Proactively carrying out checks for illegal content [166] (CTP: [78])

Points specific to communication to the public

For communication to the public, the AG also rejected arguments that grant of a licence by users to the platform amounted to active intervention. That would be different if the platform re-used content under the licence. [85]

Generally, profit was not a relevant criterion for the existence of communication to the public, but at most was an indicator. Revenue models related to attractiveness of content are a less useful indicator where the provision of physical facilities is generally carried on for profit. [86] to [88]

“Secondary” communication to the public and Article 14 knowledge and awareness

As to knowledge and awareness of illegality under the “secondary liability” interpretation of the communication to the public right, the AG suggested that they should be determined on the same principles as for Article 14 ECD. In other words, the conditions for loss of protection for a host under Article 14 should also found liability under the “secondary” communication to the public right as applicable to platforms such as YouTube and Cyando. 

In the AG's view knowledge of whether files are legal or illegal should not be presumed merely because the operator pursues a profit-making purpose. The GS Media presumption (aside from the fact that the CJEU seemed to have applied it only to hyperlinks [113]) should not be applied where the platform did not itself upload the content. That would contradict the Article 15 prohibition on imposing a general monitoring obligation. [115]

Further, the fact that an intermediary profits from illegal use should not be decisive. Any provider of goods or services that might be subject to both kinds of use will inevitably derive some of its profits from users who purchase or utilise them for illegal purposes. Other facts must therefore be demonstrated. [118]

The AG incorporated by reference ([111]) his discussion of the Article 14 knowledge and awareness provisions at [169] to [196]:
  • The knowledge of illegality required for the protection of Article 14 to be removed relates to specific illegal information. [172], [196]. That reflects the legislative purpose that Article 14 is intended to form the basis of notice and takedown procedures, when specific illegal information is brought to the attention of the service provider. [176]
  • Loss of protection based on general awareness is not compatible with the requirement of actual knowledge in Art 14(1)(a). [179]
  • As to awareness of facts and circumstances from which illegality is apparent:
    • the diligent economic operator referred to in L’Oreal v eBay is assumed, on the basis of objective factors of which it has actual knowledge relating to specific information on its servers, to perform sufficient diligence to realise the illegality of that information. It has no obligation to seek facts or circumstances in general. [182], [184], [185].
    • Since many situations regarding copyright infringement are ambiguous in the absence of context, a general obligation would create a risk of systematic over-removal in order to avoid risk of liability, posing an obvious problem in terms of freedom of expression. [189]
    • In order to be apparent, illegality must be manifest. This requirement seeks, in the AG’s view, to avoid forcing the operator itself to come to decisions on legally complex questions and, in doing so, turn itself into a judge of online legality. [187]
    • In order for illegality to be apparent, a notification must provide evidence that would allow a diligent economic operator in its situation to establish that character without difficulty and without conducting a detailed legal or factual examination. [190]
Bad faith

By way of exception to his propositions regarding specific knowledge, the Advocate General went on to discuss deliberate facilitation of illegal uses, for which general awareness of illegality would suffice to found liability. The AG discusses this bad faith exception in detail under communication to the public ([120] to [131]), incorporated by reference into his discussion of Article 14 [191].
The AG suggests that general and abstract knowledge of illegality should be sufficient to disapply Article 14 protection where the operator deliberately facilitates carrying out of illegal acts by users of its service. Where objective elements demonstrate the bad faith of the provider, then it should lose the benefit of the exemption.
The AG suggests the following principles for determining bad faith:
  • Intent to facilitate third party infringements should suffice [120]
  • The mere fact of enabling users to publish content by an automatic process and not carrying out a general pre-upload check cannot be tantamount to wilful blindness or negligence. [122]
  • Subject to notice of a specific infringement, mere negligence of a provider is (by definition) not sufficient to show that that provider is intervening ‘deliberately’ to facilitate copyright infringements committed by users. [122] 
  • The way in which a provider organises its service can, in some circumstances, show the ‘deliberate nature’ of its intervention in illegal acts of ‘communication to the public’ committed by users. [123] 
  • Characteristics of the service may demonstrate the bad faith of the provider in question, which may take the form of an intention to incite or wilful blindness towards such copyright infringements. [123] 
  • It is appropriate to check (a) whether the characteristics of the service have an objective explanation and offer added value for lawful uses and (b) whether the provider has taken reasonable steps to prevent unlawful use of the service. [124] 
  • But the service provider cannot be expected to check, in general, all user files before upload (cf ECD Art 15). Therefore reasonable steps should be a defence. Good faith will tend to be shown where the provider diligently fulfils ECD Art 14 withdrawal obligations [sic] or complies with any injunction obligations, or takes other voluntary measures. [124] 

It seems, therefore, that the AG is saying that the question of reasonable steps should be relevant only to rebut a presumption of bad faith that may arise if there are aspects of the service that do not, on the face of them, have an objective explanation and offer added value for lawful uses.
By way of guidance (although it would be a matter for the national court) the AG suggested that indexing and search functions [126], inserting advertisements into videos [121] (YouTube), and anonymity [129] and allowing users to generate download links [130] (Cyando) had an objective explanation and offered added value for lawful uses. On the other hand, the AG had doubts about Cyando’s practice of remunerating certain users based on the number of their downloaded files [131].
Stay-down

Stay-down arises in two separate contexts. The first is the argument that a host who is aware of the illegality of specific information on the platform should also be regarded as being aware of the illegality of further uploads of the same or equivalent information, and thus should not benefit from the Article 14 liability shield in respect of such future information.

The second question is whether, and if so how far, an injunction against an intermediary can require it proactively to prevent future uploads of the same or equivalent information to that specified in the injunction.
  • No imputed awareness of future uploads of the same information. As to the first issue, the AG considered that such a ‘stay-down’ interpretation of Article 14 would significantly alter its scope. It would require upload filtering not only of the same file as that notified, but of any file with equivalent content. Such an obligation would apply not only to providers that have such technology, but also those who do not have the resources to implement it. [194]
  • The AG went on to contrast that with the position in relation to injunctions against intermediaries. The CJEU has held that where a national court has determined content to be illegal, it is not contrary to the Article 15 prohibition on general monitoring obligations to grant an injunction in respect of equivalent files (i.e., in the AG’s understanding, those that use the protected work in the same way).  [220], [221] 
  • In that situation the measures must still be proportionate. It does not mean that rightsholders should be able to apply for any injunction against any intermediary service provider. In some cases a provider might be too far removed from the infringements for it to be proportionate to grant an injunction. That was not the case with YouTube and Cyando in the instant cases. [215] 
Proportionality also means that the injunction must not create obstacles to legal use of the service. Its purpose or effect cannot be to prevent users uploading legal content and making legal use of the work (such as, in the case of copyright, criticism, review or parody). [222]

[Amended 28 July 2020 to correct the number of paragraphs in the AG Opinion from 255 to 256. Unaccountably I didn't count the Conclusion...; and 29 July 2020 to eliminate a repetitious sentence.] 

Wednesday, 24 June 2020

Online Harms Revisited

When I first read the Online Harms White Paper in April last year, John Naughton in the Observer quoted my comment that if the road to hell was paved with good intentions, then this was a motorway; to which he riposted that the longest journey begins with a single step and the White Paper was it. Fourteen months down the road, has anything altered my view?
There were many reasons for thinking that at the time. To pick some:
  1. Undefined harm Not only would the proposed duty of care include lawful but harmful speech, but there was no attempt to define what might be meant by harm.
  2. Subjectivity In the context of speech, harm is subjective. That provides the regulator with virtually unlimited discretion to decide what should be regarded as harmful.
  3. Democratic deficit The regulator would, in effect, be tasked with constructing a parallel set of rules about individual speech. That would supplant the statute book – with all its carefully debated, constructed and balanced provisions about offline and online speech. If you delegate an all-encompassing law of everything to the discretion of a regulator, inevitably the regulator will end up making decisions on matters of principle that ought to be the sole preserve of Parliament.
  4. Rule of law Regulation by fiat of a discretionary regulator challenges the rule of law. Vagueness and unfettered discretion may provide flexibility, but they offend against the legality principle – the requirement of clarity and certainty in the rules that govern us. That principle is long established both as part of the European Convention on Human Rights and, for centuries before that, as domestic English law. Breach of the legality principle is especially objectionable where a foundational right such as freedom of speech is concerned.
  5. Online-offline equivalence In the offline world, safety-related duties of care applied – with good reason - only to objectively ascertainable harm such as personal injury. In relation to visitors (and online users are the direct equivalent of visitors to offline premises) such duties of care hardly ever apply in respect of injury caused by one visitor to another; and never in respect of what visitors say to each other.
    It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech - to which it is entirely unfitted.
  6. Who is being regulated? The duty of care scheme is presented as being about regulating platforms. But that is not the truth of it. It is we individual users whose speech will be regulated.  It is we users who will be liable to have our speech suppressed by online intermediaries acting as a posse of co-opted deputies of an online sheriff – a sheriff equipped with the power to write its own laws.
  7. Broadcast-style regulation is the exception, not the norm. In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is as true for the internet as for any other medium.

That was April last year. Since then we have seen the government’s Initial Consultation Response in February this year; and recently several Ministers have appeared before select committees.  What has changed?
In respect of undefined harm, nothing. The impression that harm is whatever the regulator decides it to be was reinforced in one of the recent Select Committee hearings, when the DCMS Minister said:

“We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Caroline Dinenage, Home Affairs Committee, 13 May 2020)

The main development in the Initial Consultation Response was the shift to a differentiated duty of care. This, we are told, means that for lawful but harmful content seen by adults, intermediaries will be free to set content standards in their terms and conditions. The interest of the regulator will be in ensuring that the T&Cs are enforced transparently and consistently – and perhaps effectively, depending on which section of the Initial Response you read.
Similarly, we are told that the regulator will be concerned with systems and processes to deal with online harms, not requiring the removal of specific pieces of legal content.
But is this really all that it seems? If effectiveness is a criterion, what does that mean? Is it about effectiveness in reducing harm? If so, we are back to that being based on the regulator’s view of what constitutes harm.
Nor can we ignore Ministers' apparent enthusiasm for influencing platforms as to what should be in their terms and conditions and what their algorithms should - or at least should not - be doing; all of which is evident in the recent Committee hearings. I am very much afraid that this professed shift towards a differentiated duty of care is not quite what it might seem.
Of course, we will be assured that the legislation will be used sensibly and proportionately. And, no doubt, the regulator will be required to have regard to the fundamental right of freedom of expression. But that doesn’t really cut it. You cannot cure a hole in the heart by the statutory equivalent of exhorting the patient to get better.
Let us take an example. In the recent Home Affairs Committee session discussing 5G conspiracy theories, the Home Office Lords Minister said that 5G disinformation could be divided into “harmless conspiracy theories” and “that which actually leads to attacks on engineers”.
A Committee member challenged this. She said she did not think that any element of the conspiracy theory could be categorised as ‘harmless’, because - and this is the important bit - “it is threatening public confidence in the 5G roll-out”. Then, to my astonishment if no-one else’s, the DCMS Minister agreed.
Pausing for a moment, the harm that is being identified here is people changing their opinion about the benefit of a telecommunications project.
On that basis adverse opinions about HS2, about the Heathrow 3rd runway, about any major public project, could be deemed harmful on the basis that the opinions were misinformed.
Finding ourselves in that kind of territory is, unfortunately, the inevitable result of the government’s refusal to define harm. Where speech is concerned, undefined harm is infinitely subjective and infinitely malleable.
It is easy to respond to objections of principle by saying: but children, terrorism, misinformation, cyberbullying, racism, harassment, revenge porn, abuse, and everything else in the long list of ills that are laid at the door of the internet and social media. These are undoubtedly serious matters. But none of them relieves a government of the responsibility of devising policies and legislation in a fashion that pays proper regard to constitutional principles forged over centuries of respect for the rule of law.
There are times when, engaging in this kind of commentary, one begins to doubt one’s own perspective. Hardly anyone seems to be listening, the government ploughs on regardless and gives no sign of acknowledging that these issues even exist, never mind addressing them.
But last week, an event occurred that restores some belief that no, we have not been catapulted into some anti-matter universe in which the fundamental principles of how to legislate for individual speech have been turned on their head.
The French Constitutional Council struck down large parts of the French hate speech law on grounds that refer to many of the same objections of principle that have been levelled at the UK government’s Online Harms proposals: vagueness; subjectivity; administrative discretion; lack of due process; prior restraint; chilling effect; and others.
The French law is concerned with platform obligations in relation to illegality. Such objections apply all the more to the UK government’s proposals, since they extend beyond illegality to lawful but harmful material.
The vulnerability of the UK proposals is no accident. It stems directly from the foundational design principles that underpin them: the flawed concept of an unbounded universal duty of care in relation to undefined harm.
Heading down the “law of everything” road was always going to land the government in the morass of complexity and arbitrariness in which it now finds itself. One of the core precepts of the White Paper is imposing safety by design obligations on intermediaries. But if anything is unsafe by design, it is this legislation.
[This post is based on a panel presentation to the Westminster e-Forum event on Online Regulation on 23 June 2020. A compilation of Cyberleagle posts on Online Harms is maintained here.] 

Saturday, 20 June 2020

Online Harms and the Legality Principle

The government’s Online Harms proposals, seeking to impose obligations on online intermediaries to suppress and inhibit some kinds of content posted by users, have been dogged from the outset by questionable compatibility with the rule of law. The situation has not been improved by the government’s response to its White Paper consultation and a recent round of Ministerial select committee appearances.

The rule of law issue is that a restriction on freedom must take its authority from something that can properly be described as law. Not every state edict will do, even if it has passed through appropriate legislative procedures.

This is the legality principle.  Pertinently, the French Constitutional Council has recently held unconstitutional — in some respects for breach of the legality principle — large parts of the new French hate speech legislation (the loi Avia), which imposes content removal obligations on online intermediaries.

The European Convention on Human Rights articulates the legality principle in terms that a restriction on a Convention right must be ‘prescribed by law’, or be ‘in accordance with the law’.

The legality principle is also part of the common law tradition. Lord Sumption observed last year in Re Gallagher (as to which, more below) that the principle goes back at least as far as the American founding father John Adams: “a government of laws and not of men”. (So those of you who may be sharpening your knives at the mention of the European Convention, put them away.)  

That was echoed by Lady Hale in the same case:
“The foundation of the principle of legality is the rule of law itself - that people are to be governed by laws not men. They must not be subjected to the arbitrary - that is, the unprincipled, whimsical or inconsistent - decisions of those in power.”

The legality principle has two aspects. The first is that the law be publicly accessible. The second — the aspect that concerns us here — is that something that purports to be law must have the quality of law: it must be possible for someone to foresee, with reasonable certainty, whether their contemplated conduct is liable to be affected by the law or not.

Lord Diplock, in a 1975 case, described the principle as constitutional:
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it." (Black-Clawson)   

The proposed Online Harms legislation falls squarely within that principle, since internet users are liable to have their posts, tweets, online reviews and every other kind of public or semi-public communication interfered with by the platform to which they are posting, as a result of the duty of care to which the platform would be subject. Users, under the principle of legality, must be able to foresee, with reasonable certainty, whether the intermediary would be legally obliged to interfere with what they are about to say online.

Legislation may fall foul of the legality principle in two main ways: impermissible vagueness or excessive discretionary power.  Lady Hale in Re Gallagher again:
“The law will not be sufficiently predictable if it is too broad, too imprecise or confers an unfettered discretion on those in power.” [73]

A simple criminal offence of ‘behaving badly’ would fail the legality test for impermissible vagueness — since no-one could predict in advance with any, let alone reasonable, certainty whether their behaviour would be regarded as criminal. 

Vagueness was a ground on which, among others, the French Constitutional Council decided that an aspect of the French loi Avia contributed to unconstitutionality. The provision in question specified that an intentional breach of the intermediary’s obligation could arise from the absence of a proportionate and necessary examination of notified content. That was not expressed in terms that enabled the scope of liability to be determined. Although the Council’s finding was one of unconstitutionality by reason of lack of necessity and proportionality, that kind of analysis is also relevant to legality.

The second way of failing the legality test is when legislation confers excessive discretionary power on a state official or body.  A purely discretionary power of ad hoc command does not suffice. Lord Sumption in Re Gallagher said:
“The measure must not therefore confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself. Nor should it be couched in terms so vague or so general as to produce substantially the same effect in practice.”

A discretionary power exercised on defined principles, and (if necessary) accompanied by safeguards against abuse, is capable of satisfying the legality test if the result is to render the effect of the power sufficiently foreseeable. Lord Sumption again:
“Thus a power whose exercise is dependent on the judgment of an official as to when, in what circumstances or against whom to apply it, must be sufficiently constrained by some legal rule governing the principles on which that decision is made.”

Underlying both kinds of failure to satisfy the legality test is arbitrariness. Exercise of an unfettered discretionary power is inherently arbitrary, since officials may wield the power as they please.
“an excessively broad discretion in the application of a measure infringing the right of privacy is likely to amount to an exercise of power unconstrained by law. It cannot therefore be in accordance with the law unless there are sufficient safeguards, exercised on known legal principles, against the arbitrary exercise of that discretion, so as to make its exercise reasonably foreseeable.” (Re Gallagher at [31])

English courts have interpreted the European Court of Human Rights caselaw as requiring safeguards to have the effect of enabling the proportionality of the interference to be adequately examined. (R(T) at [118]; Re Gallagher at [36] and [39])

Arbitrariness is also the vice of an impermissibly vague law:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application." (R v Rimmington (House of Lords), citing the US case of Grayned)

“the statements in R(T) about the need for safeguards against “arbitrary” interference …  [refer] to safeguards essential to the rule of law because they protect against the abuse of imprecise rules or unfettered discretionary powers.” (Re Gallagher at [41])

The recent French Constitutional Council decision provides an example of this kind of assessment, albeit in the context of necessity and proportionality rather than the legality principle. The loi Avia empowers an administrative authority to require a host to remove certain terrorist or child pornography content within one hour. This was objectionable because determination of illegality was in the sole opinion of the administrative authority and did not rest on the manifest character of the content. Nor was there any opportunity for the host to obtain a judicial ruling.

The UK online harms project is vulnerable for both reasons: impermissible vagueness and discretionary power amounting to ad hoc command. It is potentially more exposed to challenge than the French law, given that it extends beyond illegality to lawful but harmful material and activity.

The main vagueness objection to the Online Harms proposals stems from the apparent determination of the government to leave the concept of harm undefined. The vagueness inherent in undefined harm is discussed here.  

The White Paper and the Initial Response gave the impression that the government would leave it to the regulator to decide what is harmful — an impression recently reinforced by a DCMS Minister:
“We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Caroline Dinenage, Home Affairs Committee, 13 May 2020)

Any proposed legislation is likely to incorporate some attempt to articulate principles on the basis of which the discretionary power is to be exercised and safeguards intended to protect against abuse of the power. What form those principles and safeguards might take, and whether they would be capable of remedying the intrinsic legality problem, are open questions.

The government would no doubt be tempted to address such issues by including statutory obligations on the regulator, for instance to have regard to the fundamental right of freedom of expression and to act proportionately. That may be better than nothing. But can a congenital defect in legislation really be cured by the statutory equivalent of exhorting the patient to get better? It is akin to putting a sticking plaster on a hole in the heart.

The ECtHR caselaw consistently emphasises the need for clear and precise rules, for discretionary powers to be clear in scope and for safeguards to be clearly and precisely expressed.

Could Codes of Practice issued by a regulator remedy a lack of clarity? In principle that cannot be ruled out — that kind of gap-filling has been effective in the context of surveillance powers — but would Codes of Practice in this area amount to more than high level principles and a collection of ad-hoc examples? Even if they formed a coherent set of concrete rules, the regulator’s views about harm would sit alongside, and effectively supplant, the existing, carefully crafted, set of laws governing the speech of individuals.

In the context of rules about speech, that amounts to accepting wholesale delegation of lawmaking power from Parliament to a regulator, in respect of what is often regarded as the most foundational right.

As the French Constitutional Council observed:
“[F]reedom of expression and communication is all the more precious since its exercise is a condition of democracy and one of the guarantees of respect for other rights and freedoms.”

The French Constitutional Council decision is a salutary reminder that fundamental rights issues are not the sole preserve of free speech purists, nor mere legal pedantry to be brushed aside in the eagerness to do something about the internet and social media. These questions go to the heart of the legitimacy of the government’s proposals.

[Amended 22 June 2020 to make clear that the French Constitutional Council analyses were in the context of findings of lack of necessity and proportionality.] 

Sunday, 24 May 2020

A Tale of Two Committees

Two Commons Committees – the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – have recently held evidence sessions with government Ministers discussing, among other things, the government’s proposed Online Harms legislation. These sessions proved to be as revealing, if not more so, about the government’s intentions as its February 2020 Initial Response to the White Paper.

As a result, on some topics we know more than we did, but the picture is still incomplete. Some new issues have surfaced. Other areas have become less clear than they were previously.

Above all, nothing is set in stone. The Initial Response was said to be indicative of a direction of travel and to form an iterative part of a process of policy development. The destination has yet to be reached – if, that is, the government ever gets there at all. It may yet hit a road block somewhere along the way, veer off into a ditch, or perhaps undergo a Damascene conversion should it finally realise the unwisdom of creating a latter-day Lord Chamberlain for the internet. Or the road may eventually peter out into nothingness. At present, however, the government is pressing ahead with its legislative intentions.

I’m going to be selective about my choice of topics, in the main returning to some of the key existing questions and concerns about the Online Harms proposals, with a sprinkling of new issues added for good measure. Much more ground than this was covered in the two sessions.

Borrowing from the old parlour game, each topic starts with what the White Paper said; followed by what the Initial Response said; then what the Ministers said; and lastly, the Consequence. The Ministers are Oliver Dowden MP (Secretary of State for Digital, Culture, Media and Sport); Caroline Dinenage MP (Minister for Digital and Culture) and Baroness Williams (Lords Minister, Home Office).  

Sometimes the government’s Initial Response to Consultation recorded consultation submissions, but came to no conclusion on the topic. In those instances the Initial Response is categorised as saying ‘Nothing’. Some repetitive statements have been pruned.

Since this is a long read, the selected topics are set out under the numbered headings below:

1. Will Parliament or the regulator decide what “harm” means?

The White Paper said:

“… government action to tackle online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

“This list [Table 1, Online harms in scope] is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.”

The Initial Response said:

Nothing.
The Ministers said:

Oliver Dowden: “The only point that I have tried to make is that I am just keen on this proportionality point because it is often the case that regulation that starts out with the best of intentions can, in its interpretation if you do not get it right, have a life of its own. It starts to get interpreted in a way that Parliament did not intend it to be in the first place. I am just keen to make sure we put those kinds of hard walls around it so that the regime is flexible but that in its interpretation it cannot go beyond the intent that we set out in the first place in the broad principles.” (emphasis added)

Caroline Dinenage: “For what you might call the “legal but harmful” harms, we are not setting out to name them in the legislation. That is for the simple reason that technology moves on at such a rapid pace that it is very likely that we would end up excluding something….  We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Q.554) (emphasis added)

The Consequence: It is difficult to reconcile the desire of the Secretary of State to erect “hard walls”, in order to avoid unintended consequences, with the government’s apparent determination to leave the notion of harm undefined, delegating to the regulator the task of deciding what counts as harmful. This kind of approach has serious implications for the rule of law.

Left undelineated, the concept of harm is infinitely malleable. The Home Office Minister Baroness Williams suggested in the Committee session that 5G disinformation could be divided into “harmless conspiracy theories” and “that which actually leads to attacks on engineers”, as well as a far-right element. One Committee member (Ruth Edwards M.P.) responded that she did not think that any element of the conspiracy theory could be categorised as ‘harmless’, because “it is threatening public confidence in the 5G roll-out” — a proposition with which the DCMS Minister Caroline Dinenage agreed.

Harm is thus equated with people changing their opinion about a telecommunications project. This unbounded sense of harm is on a level with the notorious “confusing our understanding of what is happening in the wider world” phraseology of the White Paper.  

Statements such as the concluding peroration by Baroness Williams: “I, too, want to make the internet a safer place for my children, and exclude those who seek to do society harm” have to be viewed against the backdrop of an essentially unconstrained meaning of harm.

When harm can be interpreted so broadly, the government is playing with fire. But it is we – not the government, the regulator or the tech companies – who stand to get our fingers burnt.

2. The regulator’s remit: substance, process or both?

The White Paper said:

“In particular, companies will be required to ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms, as well as to take appropriate and proportionate action when issues arise. The new regulatory regime will also ensure effective oversight of the take-down of illegal content, and will introduce specific monitoring requirements for tightly defined categories of illegal content.” (6.16)

The Initial Response said:

“The approach will be proportionate and risk-based with the duty of care designed to ensure companies have appropriate systems and processes in place to improve the safety of their users.”

“The focus on robust processes and systems rather than individual pieces of content means it will remain effective even as new harms emerge. It will also ensure that service providers develop, clearly communicate and enforce their own thresholds for harmful but legal content.

“The kind of processes the codes of practice will focus on are systems, procedures, technologies and investment, including in staffing, training and support of human moderators.”

“As such, the codes of practice will contain guidance on, for example, what steps companies should take to ensure products and services are safe by design or deliver prompt action on harmful content or activity.”

“Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“In fact, the new regulatory framework will not require the removal of specific pieces of legal content. Instead, it will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“Of course, companies will be required to take particularly robust action to tackle terrorist content and online Child Sexual Exploitation and Abuse. The new regulatory framework will not remove companies’ existing duty to remove illegal content.”

The Ministers said:

Caroline Dinenage: “the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q554)

“It is for the regulator to set out codes of practice, but they won’t be around individual harms; they will be around systems and processes—what we expect the companies to do. Rather than focusing on individual harms, because we know that the technology moves on so quickly that there could be more, it is a case of setting out the systems and processes that we would expect companies to abide by, and then giving the regulator the opportunity to impose sanctions on those that are not doing so.” (Q.556)

Q562 Stuart C. McDonald: “…if the regulator feels that algorithms are working inappropriately and directing people who have made innocent searches to, say, far-right content, will they be able to order, essentially, the company to make changes to how its algorithms are operating?

Caroline Dinenage: Yes, I think that they will. That is clearly something that we will set out in the full response. The key here is that companies must have clear transparency, they must set out clear standards, and they must have a clear duty of care. If they are designing algorithms that in any way put people at risk, that is, as I say, a clear design choice, and that choice carries with it a great deal of responsibility. It will be for the regulator to oversee that responsibility. If they have any concerns about the way that that is being upheld, there are sanctions that they can impose.”

The Consequence: As with the specific issue around the status of terms and conditions for “lawful but harmful” content (see below), it is difficult to see how a bright line can be drawn between substance and process.  Processes cannot be designed, risk-assessed or their effectiveness evaluated in the abstract — only by reference to goals such as improving user safety and reducing risk of harm. A duty of care evaluated without reference to the kind of harm intended to be guarded against makes no more sense than the smile without the Cheshire Cat. 

In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in the abstract:
"It is never sufficient to ask simply whether A owes B a duty of care. It is always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless."

Risk assessment is familiar in the realm of safety properly so-called: danger of physical injury, where there is a clear understanding of what constitutes objectively ascertainable harm. It breaks down when applied to undefined, inherently subjective harms arising from users' speech. If "threatening public confidence in the 5G roll-out” (see above) can be labelled an online harm within scope of the legislation, that goes far beyond any tenable concept of safety.

The government appears to be adopting different approaches to illegal content and to “legal but harmful” content, the latter avowedly restricted to process (although see the next topic as to how far that can really be the case).

In passing, the Initial Response is technically incorrect in referring to “companies’ existing duty to remove illegal content”. No such general duty exists. Hosting providers lose the protection of the ECommerce Directive liability shield if they do not remove unlawful content expeditiously upon gaining actual or (for damages) constructive knowledge of the illegality. Even then, the ECommerce Directive does not oblige them to remove it. The consequence is that they become exposed to the risk of possible liability (which may or may not exist) under the relevant underlying law (see here for a fuller explanation). In practice that regime strongly incentivises hosting providers to remove illegal content upon gaining relevant knowledge. But they have no general legal obligation to do so.

3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?

The White Paper said:

“As indication of their compliance with their overarching duty of care to keep users safe, we envisage that, where relevant, companies in scope will:

  • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
  • Enforce their own relevant terms and conditions effectively and consistently. …”
“To help achieve these outcomes, we expect the regulator to develop codes of practice that set out: 

  • Steps to ensure products and services are safe by design.
  • Guidance about how to ensure terms of use are adequate and are understood by users when they sign up to use the service. …
  • Steps to ensure harmful content or activity is dealt with rapidly. …
  • Steps to monitor, evaluate and improve the effectiveness of their processes.”
The Initial Response said:

“We will not prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour is acceptable on their sites and then for platforms to enforce this consistently.”

“To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that is not illegal but has the potential to cause harm. Regulation will therefore not force companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.”

“Recognising concerns about freedom of expression, the regulator will not investigate or adjudicate on individual complaints. Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently.”

The Ministers said:

Oliver Dowden: “The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions. My focus in respect of those is principally on two things: underage harms and illegal harms. Clearly, the trickiest category is legal adult harms. In respect of that, we are looking at how we tighten the measures to ensure that those companies actually do what they promised they would do in the first place, which often is not the case.” (Q20) (emphasis added)

“Clearly, in respect of legal adult harms, that is the underlying principle anyway in the sense that what we are really trying to do is say to those social media companies and tech firms, “Be true to what you say you are doing. Just stick by your terms and conditions”. We would ask the regulator to make sure that it is enforcing them, and then have tools at our disposal to require it to do so.” (Q89) (emphasis added)

Caroline Dinenage: “A lot of this is about companies having the right regulations and standards and duty of care, and that will also be in the online harms Bill and online harms work. If we can have more transparency as to what platforms regard as acceptable—there will be a regulator that will help guide them in that process—I think we will have a much better opportunity to tackle those things head-on.” (Q513) (emphasis added)

“With regard to our role in DCMS, it is more as a co-ordinator bringing together the work of all the different Government Departments and then liaising directly with the platforms to make sure that their standards, their regulations, are reflective of some of the concerns that we have—make sure, in some cases, that harmful content can be anticipated and therefore prevented, and, where that is not possible, where it can be stopped and removed as quickly as possible.” (emphasis added) (Q525)

Baroness Williams: “There is obviously that which is illegal and that which breaches the CSPs’ terms of use. It is that latter element, particularly in the area of extremism, on which we have really tried to engage with CSPs to get them to be more proactive.” (emphasis added) (Q.527)

The Consequence: This is now one of the most puzzling areas of the government’s developing policy. The White Paper expected that codes of practice would ensure that terms and conditions meet “standards set by the regulator” and that terms of use are “adequate”. These statements were not on the face of them limited to procedural standards and adequacy. They could readily be interpreted as encompassing standards and adequacy judged by reference to harm reduction goals determined by the regulator (which, as we have seen, would be able to decide for itself what constitutes harm) – in other words, extending to the substantive content of intermediaries' terms and conditions.

When the Initial Response was published, great play was made of the shift to a differentiated duty of care: that it would be up to the intermediary to decide – for lawful content for adults – what standards to put in its terms and conditions.

The remit of the regulator would be limited to ensuring those standards are clearly stated and enforced “consistently and transparently” (or “effectively, consistently and transparently”, depending on which part of the Initial Response you turn to; or “effectively and consistently”, according to the White Paper). Indeed the Secretary of State said in evidence that "The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions".

But it seems from the other Ministers’ responses that the government has not disclaimed all interest in the substantive content of intermediaries’ terms and conditions. On the contrary, the government evidently sees it as part of its role to influence (to put it at its lowest) what goes into them. If the regulator’s task is to ensure enforcement of terms and conditions whose substantive content reflects the wishes of a government department, that is a far cry from the proclaimed freedom of intermediaries to set their own standards of acceptable lawful content.

Ultimately, what can be the point of emphasising how, in the name of upholding freedom of speech, the role of an independent regulator will be limited to enforcing the intermediaries’ own terms and conditions, if the government considers that part of its own role is to influence those intermediaries as to what substantive provisions those T&Cs should contain?

This is one aspect of an emerging issue about division of responsibility between government and the regulator. It is tempting to think that once an independent regulator is established the government itself will withdraw from the fray. But if that is not so, then reducing the remit of the independent regulator concomitantly increases the scope for the government itself to step in.

That is especially pertinent in the light of the government’s desire to cast itself as a ‘trusted flagger’, whose notifications of unlawful content the intermediaries should act upon without question. Thus Caroline Dinenage appears to regard the platforms as obliged to remove anything that the government has told them it considers to be illegal (with no apparent requirement of prior due process such as independent verification), and would like them to take seriously anything else that the government notifies to them:

“We have found that we have become—I forget the proper term, but we have become like a trusted flagger with a number of the online hosting companies, with the platforms. So when we flag information, they do not have to double-check the concerns we have. Clearly, unless something is illegal, we cannot tell organisations to take it down; they have to make their own decision based on their own consciences, standards and requirements. But clearly we are building up a very strong, trusted relationship with them to ensure that when we flag things, they take it seriously.” (Emphasis added)

4. Codes of Practice for specific kinds of user content or activity?

The White Paper said:

“[T]he White Paper sets out high-level expectations of companies, including some specific expectations in relation to certain harms. We expect the regulator to reflect these in future codes of practice.”

It then set out a list of 11 harms, accompanied in each case by a list of areas in relation to that harm that it expected the regulator to include in a code of practice. For instance, in relation to disinformation a list of 11 specific areas included:

“Steps that companies should take in relation to users who deliberately misrepresent their identity to spread and strengthen disinformation.”; and

“Promoting diverse news content, countering the ‘echo chamber’ in which people are only exposed to information which reinforces their existing views.”

The Initial Response said:

“The White Paper talked about the different codes of practice that the regulator will issue to outline the processes that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users. … We do not expect there to be a code of practice for each category of harmful content, however, we intend to publish interim codes of practice on how to tackle online terrorist and Child Sexual Exploitation and Abuse (CSEA) content and activity in the coming months.”

The Ministers said:

Caroline Dinenage: “I think I need to clear up a bit of a misunderstanding about the White Paper. The 11 harms that were listed were really intended to be an illustrative list of what we saw as the harms. The response did not expect a code of practice for each one, because the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q.554) (emphasis added)

The Consequence: The different approach to CSEA and terrorism probably owes more to the different areas of responsibility of the Home Office and the DCMS than to any dividing line between illegality and non-illegality. The White Paper covers many more areas of illegality than those two alone.

5. Search engines in scope?

The White Paper said:

“… will apply to companies that allow users to share or discover user-generated content, or interact with each other online.” (emphasis added)

“These services are offered by…  search engines” (Executive Summary)

The Initial Response said:

“The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example though comments, forums or video sharing” (emphasis added)

The Ministers said:

Caroline Dinenage: “Again, we are probably victims of the fact that we published an interim response, which was not as comprehensive as our full response will be later on in the year. The White Paper made it very clear that search engines would be included in the scope of the framework and the nature of the requirements will reflect the type of service that they offer. We did not explicitly mention it in the interim response, but that does not mean that anything has changed. It did not cover the full policy. Search engines will be included and there is no change to our thoughts and our policy on that.” (Q.560)

The Consequence: Notwithstanding the Minister’s explanation, the alterations in wording between the White Paper and the Initial Response (omitting “discover”, adding “only”) had the appearance of a considered change. The lesson for the future is perhaps that it would be unwise to parse too closely the text of anything else said or written by the government.

6. Everything from social media platforms to retail customer review sections?

The White Paper said:

“… companies of all sizes will be in scope of the regulatory framework. The scope will include… social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.” (emphasis added)

The Initial Response said:

“To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions.”

The Ministers said:

Oliver Dowden: “We are a Europe leader in this. I have seen, as I am sure you have seen, the unintended consequences of good-intended legislation then having bureaucratic implications and costs on businesses that we want to avoid.

For example, in respect of legal online harms for adults, if you are an SME retailer and you have a review site on your website for your product and people can put comments underneath that, that is a form of social media. Notionally, that would be covered by the online harms regime as it stands. The response to that is they will go through this quick test and then they will find it does not apply to them. My whole experience of that for SMEs and others is that it is all very well saying that when you are sat there and have no idea what this online harms thing is, this potentially puts a big administrative burden on you. (emphasis added)

Are there ways in which we can carve out those sorts of areas so we focus on where we need to do it? Those kinds of arguments pertain less to illegal harms and harms to children. I hope that gives you a flavour of it.” (Q.88)

Q89 Damian Hinds: “Yes, quite so. I think in the previous announcement there was quite a high estimate of the number of firms or proportion of total firms that would somehow be counted in the definition of an online platform, which was rather a disturbing thought. It would be very welcome, what you can do to limit the scope of who counts as a social media platform.”

The Consequence: This exchange does shine a light on the expansive scope of the proposed legislation. The Secretary of State said that SME retailers with review sections were “notionally” covered. However, there was nothing notional about it. Retailer review sections were expressly included in the White Paper, as were companies of all sizes.

As the Secretary of State suggests, it is little comfort for an SME to be told “don’t worry, you’ll be low risk so it won’t really apply to you” if: (a) you are in scope on the face of it, and (b) it is left to the regulator to decide whether the duty of care should bear less heavily on some intermediaries than others. 

There are, of course, many other kinds of non-social-media intermediary in scope besides SME retailers with review sections: apps, online games, community discussion forums, non-profits and many other online services. The Initial Response said “Analysis so far suggests that fewer than 5% of UK businesses will be in scope of this regulatory framework.” Whether 5% is considered small or large in absolute terms (not to mention the apparent indifference to non-UK businesses), there has been no indication of the assumptions underlying that estimate.

7. Will journalism and the press be excluded from scope?

The White Paper said:

Nothing. In a subsequent letter to the Society of Editors the then DCMS Secretary of State Jeremy Wright said:

“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

The Initial Response said:

Nothing. It limited itself to general expressions of support for freedom of expression, such as:

“…freedom of expression, and the role of a free press, is vital to a healthy democracy. We will ensure that there are safeguards in the legislation, so companies and the new regulator have a clear responsibility to protect users’ rights online, including freedom of expression and the need to maintain a vibrant and diverse public square.”

The Ministers said:

Caroline Dinenage: “Obviously, we know that a free press is one of the pillars of our society, and the White Paper, I must say from the outset, is not seeking to prohibit press freedom at all, so journalistic and editorial content is not in the scope of the White Paper. Our stance on press regulation has not changed.” (emphasis added)

“As for what has been in the papers recently, the Secretary of State wrote a letter to the Society of Editors, and this was about what you might call the below-the-line or comments section. They were concerned that that might be regulated. I think what the Secretary of State is saying is that, where there is already clear and effective moderation of that sort of content, we do not intend to duplicate it. For example, there is IPSO and IMPRESS activity on moderated content sections. Those are the technical words for it. This is still an ongoing conversation, so we are working at the moment with stakeholders to develop proposals on how we are going to reflect that in legislation, working around those parameters.” (Q.558)

“Stuart C. McDonald: But there is no suggestion that below-the-line remains unregulated. It is where that regulation should lie that is the issue.

Caroline Dinenage: Exactly.” (Q.559)

The Consequence: There are three distinct issues around inclusion or exclusion of the press from the regulatory scope of the Bill:

1. User comments on newspaper websites.  On the face of it, news organisations would be subject to the duty of care as regards user comments on their websites. The position of the government appears to be that whether the duty of care would apply would depend on whether the comments are already subject to another kind of regulation (or at least the existence of “clear and effective moderation”). Potentially, therefore, newspapers that are not regulated by IPSO or IMPRESS would be in scope for this purpose. Whether this demarcation would be achieved by a hard scope exclusion written into the Bill is not clear.

2. Journalistic or editorial material. Whilst the Minister may say that the government’s stance on press regulation has not changed, her statement that journalistic and editorial content is not “in the scope” of the White Paper is new — at least if we are to understand that as meaning that the Bill would contain a hard scope exclusion for journalistic or editorial content. Previously the government had said only that such content would not be affected by the regulatory framework. A general exclusion of journalistic or editorial material would on the face of it go much wider than newspapers and similar publications. It would be no surprise to find this statement being “clarified” at some point in the future.

3. Newspaper social media feeds and pages. Newspapers and other publications maintain their own pages, feeds and blogs on social media and other platforms. Newspapers would not themselves be subject to a duty of care in relation to their own content. But as far as the platforms are concerned the newspapers are users, so that their pages and feeds would fall under the platforms’ duty of care. As such, they would be liable to have action taken against their content by a platform in the course of fulfilling its own duty of care.

The government has said nothing about whether, and if so how, such press content would be excluded from scope. If the government is serious about excluding “journalistic or editorial” material generally from scope, an exclusion of that kind would achieve this. However, that would create immense difficulties around whether a particular feed or page is or is not journalistic or editorial material (what about this Cyberleagle blog, or the Guido Fawkes blog, for instance?), and how a platform is supposed to decide whether any particular content is or is not in scope.

8. End-to-end encryption

The White Paper said:

Nothing. (Although the potential for the duty of care to be applied to prevent the use of end-to-end encryption was evident.)

The Initial Response said:

Nothing.

The Ministers said:

Baroness Williams: “[Facebook] then announced that they were going to end-to-end encrypt Messenger. That, for us, is gravely worrying, because nobody will be able to see into Messenger. I know there is going to be a Five Eyes engagement next week, and I do not know if the Committee knows, but the Five Eyes wrote to Mark Zuckerberg last year, so worried were we about this development.” (Q538)

Q566 Chair: “On that basis, does end-to-end encryption count as a breach of duty of care?

Baroness Williams: It is criminal activity that would breach the duty of care. Allowing criminal activity to happen on your platform would be the breach of duty of care. End-to-end encryption, in and of itself, is not a breach of duty of care.

Chair: Presumably, for this regulation to have any bite at all, they will have to be able to take some enforcement against the policies that fail to prevent criminal activity. On that logic, introducing the end-to-end encryption, if it knowingly stops the company from preventing illegal activity—for example, the kind of online child abuse you have talked about—that would surely count as a breach of duty of care.

Baroness Williams: I fully expect that that is what some of the Five Eyes discussions, which will be happening very shortly, will look at.”

The Consequence: This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end-to-end encryption. The “in and of itself” phraseology used by the Minister appears not to rule that out. This issue is related to the question of how the legislation might apply to private messaging providers, a topic on which the government has consulted but has not yet published a conclusion.

9. Identity verification

The White Paper said:

“The internet can be used to harass, bully or intimidate. In many cases of harassment and other forms of abusive communications online, the offender will be unknown to the victim. In some instances, they will have taken technical steps to conceal their identity. Government and law enforcement are taking action to tackle this threat.”

“The police have a range of legal powers to identify individuals who attempt to use anonymity to escape sanctions for online abuse, where the activity is illegal. The government will work with law enforcement to review whether the current powers are sufficient to tackle anonymous abuse online.”

“Some of the areas we expect the regulator to include in a code of practice are:

  • Steps to limit anonymised users abusing their services, including harassing others. …
  • Steps companies should take to limit anonymised users using their services to abuse others.”

The Initial Response said:

Nothing.

The Ministers said:

Q25 John Nicolson: “Would you like to see online harms legislation compel social media companies to verify the identity of users, not of course to publish them but simply to verify them before the accounts are up and running?

Oliver Dowden: There is certainly a challenge around, as you mentioned, bots, which are sometimes used by hostile state activity, and finding better ways of verifying to see whether these are genuine actors or whether it is co-ordinated bot-type activity. That is through online harms but there is obviously a national security angle to that as well.”

Q530 Ms Abbott: “Finally, would you consider changing the regulation, so you could post anonymously on a website or Twitter or Facebook, but the online platform would have your name and address? In my experience, when you try to pursue online abuse, you hit a brick wall because the abuser is not just anonymous when they post, the online platform doesn’t have a name and address either.

Caroline Dinenage: That is a really interesting idea. It is definitely something that we have been discussing. With regard to the online harms legislation that we are putting together at the moment, we have said very clearly that companies need to be much more transparent. They need to set out standards and they need to clarify what their duty of care is and to have a robust complaints procedure that people can use and can trust in. That is why we are also appointing a regulator that will set out what good looks like and will have expectations but also powers to be able to demand data and information and to be able to impose sanctions on those that they do not feel are abiding by them.

Q531 Chair: What does that actually mean? Does that mean that you think that the regulator should have the power to say that social media companies should not allow people to be … [a]nonymous to the platform?

Caroline Dinenage: This is something that we are considering at the moment. There are a number of things here. In the online harms legislation, the regulator will set out their expectations.

Chair: We can’t devolve everything to the regulator. Something like this is really important—should social media companies be allowed to not know who it is that is using their platforms? That feels like a big question that Parliament should take a view on, not something we just hand over to a regulator and say, “Okay, whatever you think,” later on.

Caroline Dinenage: Yes, exactly. That is why we are considering it at the moment, as part of the online harms legislation, and that, of course, will come before Parliament.”

Q545 Tim Loughton: “… If I want to set up a bank account and all sorts of other accounts, I must prove to the bank or organisation who I am by use of a utility bill and other things like that. It is quite straightforward. What is the downside of a similar requirement being enforced by social media platforms before you are allowed to sign up for an account? This is an issue that we have looked at before on the Committee. Many of us have suggested that we should go down that route. I gather that it already happens in South Korea. You say that you are looking at it, Minister Dinenage. What, in your view, is the downside of having such scrutiny?

Caroline Dinenage: You make a very compelling argument, Mr Loughton. A lot of what you said is extremely correct. The only thing we are mulling over and trying to cope with is whether there is any reason for anonymity for people who are victims, who want to be able to whistleblow, and who may be overseas and might not want to identify themselves because they fear for their lives or other harm. There are those issues of anonymity and protecting someone’s safety and ability to speak up. That is what we are wrestling with.

Q546 Tim Loughton: By the same token, you could have somebody with a fake identity who is falsely whistleblowing or pushing around propaganda, so it cuts both ways. I fail to see the downside of having a requirement that you have to prove who you are—not least because we know what happens when people are caught and have their sites taken down. Five minutes later, they set up another new anonymous site peddling the same sort of false information.

Caroline Dinenage: You make a very compelling argument. This is such an important piece of legislation, and we have to get it right. As I say, it is world-leading. Everybody is looking at us to see how we do it. We need to make sure that we have taken into consideration every angle, and that is what we are doing at the moment.”

The Consequence: Identity verification is evidently an issue that is bubbling to the surface. The most fundamental objection is that the right of freedom of expression secured by Article 19 of the Universal Declaration of Human Rights is not conditioned upon identity verification. It does not say:

"Everyone has the right to freedom of opinion and expression upon production of any two of the following: driving licence, passport, recent utility or council tax bill...".

In South Korea, legislation imposing online identity verification obligations was declared unconstitutional in 2012.

The Home Affairs Committee raised, to the best of my knowledge for the first time in any Parliamentary deliberation on the Online Harms project, the question of what should be decided by Parliament and what delegated to a regulator. That is not limited to the question of identity verification. It is an inherent vice of regulatory powers painted with such a broad brush that many concrete issues will lie hidden behind abstractions, to surface only when the regulator turns its light upon them – by which time it is far too late to object that the matter should have been one for Parliament to decide. That vice is compounded when the powers affect the individual speech of millions of people.

10. Extraterritoriality

The White Paper said:

“The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users.” (6.9) (emphasis added)

“We are also considering options for the regulator, in certain circumstances, to require companies which are based outside the UK to appoint a UK or EEA-based nominated representative.” (6.10)

The Initial Response said:

Nothing of relevance.

The Ministers said:

“Q569: Andrew Gwynne: Presumably the regulations will apply to all content visibly available in the UK—is that correct?

Baroness Williams: Yes.”

The Consequence: Charitably, perhaps we should assume that the Minister misspoke. There is a vast difference between providing services to UK users and mere visibility in the UK. Given the inherent cross-border nature of the internet, asserting a country’s local law against content on a mere visibility basis is tantamount to asserting world-wide extra-territoriality. 

It would be more consistent with the direction in which internet jurisdictional norms have moved over the last 25 years to apply a test of whether the provider is targeting the UK.