Monday 28 December 2020

Internet legal developments to look out for in 2021

Seven years ago I started to take an annual look at what the coming year might hold for internet law in the UK. This exercise has always, perforce, included EU law. With Brexit now fully upon us, future developments in EU law will no longer form part of UK law. Nevertheless, they remain potentially influential: not least because the European Union (Withdrawal) Act 2018 provides that UK courts may have regard to anything relevant done by the CJEU, another EU entity or the EU after 31 December 2020. In any case I am partial to a bit of comparative law. So this survey will continue to keep significant EU law developments on its radar.

What can we expect in 2021?

Copyright

Digital Single Market
EU Member States are due to implement the Digital Copyright Directive by 7 June 2021. This includes the so-called snippet tax (the press publishers’ right) and the Article 17 rules for online content-sharing service providers (OCSSPs). The UK is not obliged to implement the Directive and has said that it has no plans to do so. Any future changes to the UK copyright framework will be “considered as part of the usual domestic policy process”.

The Polish government’s challenge to Article 17 (Poland v Parliament and Council, Case C-401/19) is pending. Poland argues that Article 17 makes it necessary for OCSSPs, in order to avoid liability, to carry out prior automatic filtering of content uploaded online by users, and therefore to introduce preventive control mechanisms. It contends that such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportionate and necessary.

Linking and communication to the public

The UK case of Warner Music/Sony Music v TuneIn is due to come before the Court of Appeal early in 2021.

Pending CJEU copyright cases

Several copyright references are pending before the EU Court of Justice.

The YouTube and Uploaded cases (C-682/18 Peterson v YouTube and C-683/18 Elsevier v Cyando) referred from the German Federal Supreme Court include questions around the communication to the public right, as do C-392/19 VG Bild-Kunst v Preussischer Kulturbesitz (Germany, BGH), C-442/19 Brein v News Service Europe (Netherlands, Supreme Court) and C-597/19 Mircom v Telenet (Belgium). Advocate General Opinions have been delivered in YouTube/Cyando, VG Bild-Kunst and Mircom.

YouTube/Cyando and Brein v News Service Europe also raise questions about copyright injunctions against intermediaries, as does C-500/19 Puls 4 TV.

Linking, search metadata and database right

C-762/19 CV-Online Latvia is a CJEU referral from Riga Regional Court concerning database right. The defendant search engine finds websites that publish job advertisements and uses hyperlinks to redirect users to the source websites, including that of the applicant. The defendant’s search results also include information – hyperlink, job, employer, geographical location of the job, and date – obtained from metatags on the applicant’s website published as Schema.org microdata. The questions for the CJEU are whether (a) the use of a hyperlink constitutes re-utilisation and (b) the use of the metatag data constitutes extraction, for the purposes of database right infringement.
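By way of illustration only (the markup and values below are invented, not taken from the case papers), the following sketch shows the kind of data at issue: a job listing marked up with Schema.org JobPosting microdata, and a short Python routine that extracts the itemprop fields – title, employer, location and date – in the way a job-search aggregator might.

```python
# Minimal sketch of extracting Schema.org microdata fields from a job
# listing page, using only Python's standard library. The HTML snippet
# and field values are hypothetical.
from html.parser import HTMLParser

class MicrodataExtractor(HTMLParser):
    """Collects itemprop values from meta tags and element text."""
    def __init__(self):
        super().__init__()
        self.items = {}
        self._pending_prop = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("itemprop")
        if not prop:
            return
        if tag == "meta":
            self.items[prop] = attrs.get("content", "")
        else:
            self._pending_prop = prop

    def handle_data(self, data):
        if self._pending_prop and data.strip():
            self.items[self._pending_prop] = data.strip()
            self._pending_prop = None

page = """
<div itemscope itemtype="https://schema.org/JobPosting">
  <span itemprop="title">Warehouse operative</span>
  <span itemprop="hiringOrganization">Example Recruitment SIA</span>
  <meta itemprop="jobLocation" content="Riga">
  <meta itemprop="datePosted" content="2020-11-30">
</div>
"""

extractor = MicrodataExtractor()
extractor.feed(page)
print(extractor.items)
# {'title': 'Warehouse operative', 'hiringOrganization': 'Example Recruitment SIA',
#  'jobLocation': 'Riga', 'datePosted': '2020-11-30'}
```

Whether re-publishing fields of this kind amounts to ‘extraction’, and whether the accompanying hyperlink amounts to ‘re-utilisation’, is what the reference asks.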

Online intermediary liability

The UK government published its Full Consultation Response to the Online Harms White Paper on 15 December 2020, paving the way for a draft Online Safety Bill in 2021. The government has indicated that the draft Bill will be subject to pre-legislative scrutiny.

The German Federal Supreme Court has referred two cases (YouTube and Cyando – see above) to the CJEU asking questions about (among other things) the applicability of the ECommerce Directive hosting protections to UGC sharing sites. The Advocate General’s Opinion in these cases has been published.

Brein v News Service Europe and Puls 4 TV (see above for both) also ask questions around the Article 14 hosting protection, including whether it is precluded if communication to the public is found.

The European Commission published its proposals for a Digital Services Act and a Digital Markets Act on 15 December 2020. The proposed Digital Services Act includes replacements for Articles 12 to 15 of the ECommerce Directive. The proposals will now proceed through the EU legislative process.

The European Commission’s Proposal for a Regulation on preventing the dissemination of terrorist content online is nearing the final stages of its legislative process, the Council and Parliament having reached political agreement on 10 December 2020. The proposed Regulation is notable for requiring one-hour takedown response times and also for proactive monitoring obligations - potentially derogating from the ECommerce Directive Article 15 prohibition on imposing general monitoring obligations on conduits, caches and hosts.

The prospect of a post-Brexit UK-US trade agreement has prompted speculation that such an agreement might require the UK to adopt a provision equivalent to Section 230 of the US Communications Decency Act. However, if the US-Mexico-Canada Agreement precedent were adopted in such an agreement, that would appear not to follow (as explained here).

Cross-border 

The US and the UK signed a Data Access Agreement on 3 October 2019, providing domestic law comfort zones for service providers to respond to data access demands from authorities located in the other country. No announcement has yet been made that the Agreement has entered into operation. The Agreement has potential relevance in the context of a post-Brexit UK data protection adequacy decision by the European Commission.

Discussions continue on a Second Protocol to the Cybercrime Convention, on evidence in the cloud.

State surveillance of communications


The kaleidoscopic mosaic of cases capable of affecting the UK’s Investigatory Powers Act 2016 (IP Act) continues to reshape itself. In this field CJEU judgments remain particularly relevant, since they form the backdrop to any data protection adequacy decision that the European Commission might adopt in respect of the UK post-Brexit. The recently agreed UK-EU Trade and Co-operation Agreement provides a period of up to 6 months for the Commission to propose and adopt an adequacy decision.

Relevant CJEU judgments now include, most recently, Privacy International (Case C-623/17), La Quadrature du Net (C-511/18 and C-512/18), and Ordre des barreaux francophones et germanophone (C-520/18) (see discussion here and here).

Domestically, Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. Some EU law aspects (including bulk powers) were stayed pending the Privacy International reference to the CJEU. The Divisional Court rejected the claim that the IP Act data retention powers provide for the general and indiscriminate retention of traffic and location data, contrary to EU law. That point may in due course come before the Court of Appeal.

In the European Court of Human Rights, Big Brother Watch and various other NGOs challenged the pre-IP Act bulk interception regime under the Regulation of Investigatory Powers Act (RIPA). The ECtHR gave a Chamber judgment on 13 September 2018. That and the Swedish Centrum för Rättvisa case were subsequently referred to the ECtHR Grand Chamber and await judgment. If the BBW Chamber judgment had become final it could have affected the IP Act in as many as three separate ways.

In response to one of the BBW findings the government has said that it will introduce ‘thematic’ certification by the Secretary of State of requests to examine bulk secondary data of individuals believed to be within the British Islands.

Software - goods or services?

Judgment is pending in the CJEU on a referral from the UK Supreme Court asking whether software supplied electronically as a download and not on any tangible medium constitutes goods and/or a sale for the purposes of the Commercial Agents Regulations (C-410/19 Computer Associates (UK) Ltd v The Software Incubator Ltd). The Advocate General’s Opinion was delivered on 17 December 2020.

Law Commission projects

The Law Commission has in train several projects that have the potential to affect online activity.

It is expected to make recommendations on reform of the criminal law relating to Harmful Online Communications in early 2021. The government has said that it will consider, where appropriate, implementing the Law Commission’s final recommendations through the forthcoming Online Safety Bill. The Law Commission issued a consultation paper in September 2020 (consultation closed 18 December 2020).

The Law Commission has also issued a Consultation Paper on Hate Crime Laws, which while not specifically focused on online behaviour inevitably includes it (consultation closed 24 December 2020).

It has recently launched a Call for Evidence on Smart Contracts (closing 31 March 2021) and is also in the early stages of a project on Digital Assets.

Electronic transactions

The pandemic has focused attention on legal obstacles to transacting electronically and remotely. Whilst such impediments are uncommon in commercial transactions, some do exist and, in a few cases, have been temporarily relaxed. That may pave the way for permanent changes in due course.

Although the question typically asked is whether electronic signatures can be used, the most significant obstacles tend to be presented by surrounding formalities rather than signature requirements themselves. A case in point is the physical presence requirement for witnessing deeds, which stands in the way of remote witnessing by video or screen-sharing. The Law Commission Report on Electronic Execution of Documents recommended that the government should set up an Industry Working Group to look at that and other issues.

Data Protection 

Traditionally this survey does not cover data protection (too big, and a dense specialism in its own right). On this occasion, however, the Lloyd v Google appeal pending in the UK Supreme Court should not pass without notice.

ePrivacy

EU Member States had to implement the Directive establishing the European Electronic Communications Code (EECD) by 21 December 2020. The Code brings ‘over the top’ messaging applications into the scope of ‘electronic communications services’ for the purpose of the EU telecommunications regulatory framework. As a result, the communications confidentiality provisions of the ePrivacy Directive also came into scope, affecting practices such as scanning to detect child abuse images. In order to enable such practices to continue, the European Commission proposed temporary legislation derogating from the ePrivacy Directive prohibitions. The proposed Regulation missed the 21 December deadline and continues through the EU legislative process.

Meanwhile there is as yet no conclusion to the long-drawn-out attempt to reach consensus on a proposed replacement for the ePrivacy Directive itself.

[Updated 29 December 2020 to add sections on Data Protection and ePrivacy.] 




Thursday 17 December 2020

The Online Harms edifice takes shape

The government has now published the Final Response to its Consultation on the April 2019 Online Harms White Paper.

Background

To recap, in the White Paper the government proposed to impose a “duty of care” on companies whose services host user-generated content or facilitate public or private online interaction between users. The duty of care would also apply to search engines.

An intermediary in scope would have to take reasonable steps to prevent, reduce or mitigate harm occurring on its service, including harm arising from lawful content and activity deemed to be harmful. By its nature the duty placed on the intermediary would be to prevent the risk of one third party user causing harm to another.

This proposal differed from offline duties of care in two main respects. First, the White Paper did not limit or define the notion of harm. Comparable safety-related duties of care in the offline world are about objectively ascertainable physical injury and damage to property. An undefined concept of harm arising from online speech was inevitably subjective and malleable. It raised objections of impermissible vagueness, consequent arbitrariness, and the prospect of online speech being judged by the standard of the most easily offended reader, viewer or listener.

Second, in the offline world a safety-related duty of care that imposes liability for failure to prevent third parties injuring each other is the exception rather than the norm - and in any event has not been applied to speech.

The White Paper proposed that the intermediaries’ duty of care would be overseen and enforced by a discretionary regulator - subsequently indicated as likely to be Ofcom - reminiscent of the world of television and radio. This represented a radical departure from the offline world, in which individual speech is governed only by settled and certain general law, not broadcast-style regulation by regulator.

All this was presented under the banner of offline-online equivalence.

The effect of the proposed Online Harms regime, although presented as regulating the tech companies, is that the regulator would indirectly govern our own individual speech via the proxy of online intermediaries acting under the legal compulsion of the duty of care. If harm were left undefined and unlimited, then the regulator would in effect have the ability to write its own parallel rulebook for online speech – both as to what amounted to harm, and what steps an intermediary should take to mitigate the risk of speech that the regulator deemed to be harmful.

In February 2020 the government published an Initial Response to the White Paper signalling some revisions to the regime, in particular a ‘differentiated’ duty of care that would apply more lightly to content that was harmful but not illegal. There was still no attempt to define or limit the concept of harm.

The government has now confirmed that Ofcom will be the scheme’s discretionary regulator. The Final Response proposes a number of significant changes to the regime described in the White Paper.

Harms in scope

The most significant development is that the government has now:

  • Proposed a general definition of “harmful” content and activity: it must give rise to a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. [2.2] 
  • Significantly limited what counts as illegal user content and activity for the purposes of the duty of care: excluding civil liability altogether and also limiting the kinds of criminal offences in scope to those that meet the general definition of “harmful” [2.24].

It has also confirmed previous indications that harms to organisations will not be in scope. [2.2, 4.1] Nor would intellectual property breaches, data protection breaches, fraud, breaches of consumer protection law, cyber security breaches or hacking. Harm arising from dark web activity would also be excluded. [2.3]

The combined effect of these steps is that the subject matter of the duty of care has moved in the direction of comparable offline duties of care. It is now more focused towards personal safety properly so-called, rather than resting on unbounded notions of harm. That is also reflected in the new name for the legislation: the Online Safety Bill.

By way of example, the government now explains that disinformation should not be regarded as per se dangerous, and that to do so would trespass unacceptably on freedom of speech:

“the duty of care will apply to content or activity which could cause significant physical or psychological harm to an individual, including disinformation and misinformation. Where disinformation is unlikely to cause this type of harm it will not fall in scope of regulation. Ofcom should not be involved in decisions relating to political opinions or campaigning, shared by domestic actors within the law.” [2.81]

This paragraph recalls the difference of opinion between Home Office and DCMS Ministers over 5G conspiracy theories when giving evidence to the Home Affairs Committee in May 2020.

Nevertheless, the definition of harmful remains problematic: not least because inclusion of ‘psychological impact’ may suggest that the notion of harm is still tied to variable, subjective reactions of different readers. Subjectivity opens the door to application of a standard of the most readily upset user. And while the subject matter of the duty of care may be more closely aligned with traditional duties of care, its nature – a duty to prevent third parties from harming each other – remains the exception, not the norm, in the offline world.

The Final Response proposes the creation, by secondary legislation, of specific ‘priority categories’ of harmful content and criminal offences, posing the greatest risk to individuals. [24], [2.3], [2.20]. The significance of these categories would be in underpinning a reformulated version of the ‘differentiated’ duty of care that was floated in the government’s Initial Response (see further below).

Providers and services in scope

Under the revised proposals, in-scope providers would be split into two categories of provider, subject to versions of the duty of care differing both as to what steps would be required to discharge the duty of care, and in respect of what kinds of harmful content. Only services designated as Category 1 would be duty-bound to address legal but harmful content.

Ofcom would determine which services meet the criteria for Category 1, according to thresholds previously set by the government. The relevant factors would be set out in the legislation: size of audience and functionalities offered.

According to the Response, functionalities such as the ability to share content widely or contact users anonymously are more likely to give rise to harm. [2.16]. When world-wide availability is an inherent feature of the internet, to treat the ability to share content widely as inherently risky is challenging for a government that proclaims that freedom of expression is at the heart of the proposed regulatory framework [1.10]. Contrary to the popular slogan, freedom of reach is indeed an aspect of freedom of speech - as the Supreme Court of India has held:

"There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial." 

In the offline world, providing a venue specifically for activities that create a risk of danger is one situation in which a duty to prevent visitors injuring each other can arise. But to suggest that merely enabling individuals to speak to a large audience is a dangerously risky activity verges on an existential challenge to freedom of speech.

The Response excludes from scope:

  • certain ‘low-risk’ activities: user comments on digital content in relation to content directly published by a service. This would exclude online product and service reviews and ‘below the line’ reader comments on news website articles. [1.7]
  • three kinds of service: (a) B2B services as previously signalled in the Initial Response, (b) online services managed by educational institutions already subject to sufficient safeguarding duties or expectations, and (c) e-mail, voice telephone and SMS/MMS services. [1.6]

As to (c), the Response observes that “It is not clear what intermediary steps providers could be expected to take to tackle harm on these services before needing to resort to monitoring communications, so imposing a duty of care would be disproportionate.”

The result of the exclusions appears to be that the John Lewis customer review section would now be out of scope, but a site such as Mumsnet would still be in scope.

OTT private messaging services remain in scope [1.5]. The Response takes an approach to these that differs markedly from its treatment of SMS/MMS services. Messaging providers may be required to monitor communications on private communications services, potentially by two routes.

First, it appears that Ofcom may have discretion to include monitoring in a Code of Practice. (Strictly speaking, however, this would not be mandatory, since it is always open to a provider to demonstrate to Ofcom that it can fulfil its duty of care as effectively in some other way [2.48].) The non-statutory interim code of practice on online child sexual exploitation and abuse (CSEA) published by the Home Office alongside the Response provides that automated technology should be considered on a voluntary basis.

Second, Ofcom would have express power to require companies to use “automated technology that is highly accurate” to identify illegal CSEA content and activity. This power would be usable where alternative measures cannot effectively address CSEA. Whilst the Response comments that this power is more likely to be considered proportionate on public platforms than private services, private services are not excluded. Ofcom would be required to seek approval from Ministers before exercising the power, on the basis that sufficiently accurate tools exist. The Response notes that the government assesses that, currently, sufficiently accurate tools exist to identify CSEA material that has previously been assessed as illegal. [2.59, 2.60]
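The reference to material that has previously been assessed as illegal points to matching uploads against databases of known content, rather than to classifying never-before-seen material. Purely as an illustrative sketch (real deployments use perceptual hashing, such as PhotoDNA, so that resized or re-encoded copies still match; a cryptographic hash and a placeholder digest are used here only to keep the example self-contained):

```python
# Illustrative sketch: matching an upload against a pre-compiled set of
# digests of known, previously assessed material. The digest below is a
# placeholder, not an entry from any real database.
import hashlib

KNOWN_DIGESTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def digest(content: bytes) -> str:
    """SHA-256 hex digest of the uploaded bytes."""
    return hashlib.sha256(content).hexdigest()

def is_known_material(uploaded: bytes) -> bool:
    """True if the upload matches a digest in the known-material set."""
    return digest(uploaded) in KNOWN_DIGESTS

print(is_known_material(b"example upload"))  # False for arbitrary content
```

Matching against known digests helps explain why the government confines its accuracy assessment to previously assessed material; detecting new material is a different and harder problem.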

Encryption is not mentioned in the Response.

News media and journalism

The potential application of the legislation to news media and journalism has been fraught from the outset. The White Paper did not mention the issue, following which the then Secretary of State wrote to the Society of Editors assuring them that “where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

This left questions unanswered, for instance the position of mainstream news media not regulated by IPSO or IMPRESS. Nor did it address the position of newspapers’ own social media pages and feeds, which would count as user generated content and thus be indirectly regulated by Ofcom via the intermediaries’ duty of care.

The Final Response is, if anything, less clear than previously. It confirms that comment sections on news publishers’ websites would be out of scope, by virtue of the ‘low risk’ user comments exclusion mentioned above. For social media feeds, it says that legislation will include ‘robust protections’ for journalistic content shared on in-scope services. As to what those protections might be, and what might count as journalistic content, the Response is silent. [1.10, 1.12]

Differentiated duty of care

The Initial Response proposed a differentiated duty of care, whereby for legal but harmful material and activities in-scope providers would be required only to enforce transparently, consistently and (perhaps) effectively, the standards that they chose to incorporate in their terms and conditions.

It always did seem unlikely that, for ‘legal but harmful’ content, the government intended to leave intermediaries completely to their own devices as to what standards (if any) to incorporate in their user terms and conditions. In 2018, after all, the government had said in its consultation response to the Internet Safety Strategy Green Paper that:

“The government has made clear that we require all social media platforms to have [inter alia]: Terms and conditions that provide a minimum level of safety and protection for users”.

So it has proved. The proposal in the Final Response is complex and nuanced. Its main features are:

  • Providers that exceed specified audience and functionality thresholds will be designated as Category 1 providers (see above). 
  • All in-scope providers will be expected to assess whether children are likely to access their services and, if so, to put in place additional protections for children using them [2.15] 
  • Only Category 1 providers will be required to take action with regard to legal but harmful content and activity accessed by adults [2.15].
  • The duty of care of non-Category 1 providers for adults would therefore apply only in relation to criminal content and activities (of a kind not otherwise excluded) that present a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals.

It should follow, although the Response does not spell this out completely clearly, that for non-Category 1 providers the general obligations listed below (such as risk assessment) would apply only in relation to the risk of such criminal content and activities – and that ‘safety’ should also be understood in that sense.

For Category 1 providers the general obligations would apply additionally to legal content and activity presenting a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. 

General obligations

  • All in-scope providers have a primary responsibility to take action to prevent user-generated content or activity on their services causing significant physical or psychological harm to individuals. To do this they will complete an assessment of the risks associated with their services and take reasonable steps to reduce the risks of the harms they have identified occurring. [2.7]
  • Providers will fulfil the duty of care by putting in place systems and processes that improve user safety on their services – including, for example, user tools, content moderation and recommendation procedures. [2.9]
  • Providers will be required to consider users’ rights, including freedom of expression online, both as part of the risk assessment and when making decisions on what safety systems and processes to put in place. [2.10]
  • Regulation will ensure transparent and consistent application of terms and conditions relating to harmful content. This will include preventing companies from arbitrarily removing content. [2.10]
  • Users must be able to report harm when it does occur and seek redress, challenge wrongful takedown and raise concerns about companies’ compliance with their duties. [2.11]
  • All providers will have a specific legal duty to have effective and accessible reporting and redress mechanisms. This will cover harmful content and activity, infringement of rights (such as over-takedown), or broader concerns about a company’s compliance with its regulatory duties [2.12]

Illegal content and activities

  • For in-scope criminal activity, all providers will need to ensure that illegal content is removed expeditiously and that the risk of it appearing and spreading across their services is minimised by effective systems [2.19]
  • Priority categories of offences, against which providers will be required to take particularly robust action, will be set out in secondary legislation. [2.20] For CSEA and terrorism this may include proactively identifying and blocking or removing this type of material if other steps have not been effective and safeguards are in place. [2.21]

The Response is silent as to how such an obligation may be consistent with the prohibition on general monitoring obligations under Article 15 of the ECommerce Directive. The government has said, in the context of Brexit, that it has no current plans to change the UK’s approach to the prohibition on general monitoring requirements.

Legal but harmful content and activity accessed by adults (Category 1 providers only)

  • The legislation will not require removal of specific pieces of legal content [2.28], unless specified as not permitted by the provider’s terms and conditions [2.33]. Terms and conditions could cover, for example, labelling and de-prioritising content [2.32].
  • Priority categories of legal but harmful material will be set out in secondary legislation. These will be categories of legal but harmful material that Category 1 providers should, at a minimum, address through their terms and conditions. The Response gives the examples of content promoting self-harm, hate content, online abuse that does not meet the threshold of a criminal offence, and content encouraging or promoting eating disorders. [2.29]
  • Category 1 providers will be obliged to state how they will handle other categories of legal but harmful material identified in their risk assessment and make clear what is acceptable on their services for that content. [2.31]

Controversial viewpoints

  • Category 1 companies will not be able to arbitrarily remove controversial viewpoints and users will be able to seek redress if they feel that content has been removed unfairly. [2.34]
  • User redress mechanisms will enable users to challenge content that unduly restricts their freedom of expression. This appears to apply to all in-scope providers (Annex A).

These provisions appear to be the ‘impartiality’ requirements that were trailed in the press before the release of the Final Response, reportedly at the instigation of 10 Downing Street. It is unclear whether these provisions are intended to override substantive policies set out in providers’ terms and conditions. They appear to be unrelated to, or at least to go wider than, issues about illegal or harmful content.

Children

  • All companies in scope will be required to assess the likelihood of children accessing their service. [2.36] Only services likely to be accessed by children will be required to provide additional protections for children accessing them, starting with conducting a specific child safety risk assessment. [2.36], [2.37]
  • The government will set out in secondary legislation priority categories of legal but harmful content and activity impacting children, meeting the general definition of harmful content and activity already described. These will be categories impacting children that companies in scope should, at a minimum, take action on. [2.38]
  • Age assurance and age verification technologies are expected to play a key role in fulfilling the duty of care. [2.41]

Codes of Practice

The Final Response has increased the amount of influence that the government will have over Ofcom’s Codes of Practice. Ofcom will be required to send the final draft of a Code of Practice to the Culture Secretary and the Home Secretary, who will have the power to reject a draft code and require the regulator to make modifications for reasons relating to government policy.

Parliament will have the opportunity to debate and vote on the high level objectives set out by the government for the Codes of Practice by the affirmative resolution procedure. Completed codes will be laid in Parliament, subject to negative resolution. [4.10]

Search engines

Little is said in the Final Response about how the proposed duty of care would apply to search engines, beyond a brief summary of actions that they can take to mitigate the risk of harm and proportionate systems and processes that they would be expected to put in place to keep their users safe.

Search engines would need to assess the risk of harm occurring across their entire service. Ofcom would provide guidance specific to search engines regarding regulatory expectations.

The government proposes that given the distinct nature of search engines, legislation and codes of practice would include specific material for them. It says that all regulatory requirements would be proportionate, and respect the key role of search engines in enabling access to information online. [1.3]

Territoriality

For the first time, the Final Response has set out the proposed territorial reach of the legislation. Somewhat surprisingly, it appears to propose that services should be subject to UK law on a ‘mere availability of content’ basis. Given the default cross-border nature of the internet, this is tantamount to legislating extraterritorially for the whole world. It would follow that any provider anywhere in the rest of the world would have to geo-fence its service to exclude the UK in order to avoid engaging UK law. Legislating on a mere availability basis has been the subject of criticism over many years since the advent of the internet. [1.1]
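In practical terms, avoiding ‘mere availability’ jurisdiction means geo-fencing: resolving each visitor’s IP address to a country and refusing to serve UK addresses. A minimal sketch, assuming a hypothetical country_for_ip() lookup backed by an IP-to-country database (the mapping below is invented), might look like this:

```python
# Illustrative sketch of geo-fencing a service to exclude UK visitors.
# country_for_ip() stands in for a real GeoIP lookup; the address-to-country
# mapping here is invented and uses documentation-range addresses.
HTTP_UNAVAILABLE_FOR_LEGAL_REASONS = 451

_FAKE_GEO_DB = {
    "203.0.113.7": "GB",
    "198.51.100.9": "DE",
}

def country_for_ip(ip: str) -> str:
    """Hypothetical IP-to-country lookup."""
    return _FAKE_GEO_DB.get(ip, "UNKNOWN")

def handle_request(ip: str) -> int:
    """Refuse UK traffic; serve everyone else."""
    if country_for_ip(ip) == "GB":
        return HTTP_UNAVAILABLE_FOR_LEGAL_REASONS
    return 200

print(handle_request("203.0.113.7"))   # 451 - blocked
print(handle_request("198.51.100.9"))  # 200 - served
```

IP-based geolocation is imperfect and can be circumvented (for example by VPNs), so in practice such geo-fencing is only ever approximate.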

Overall commentary

The fundamental issues with the government’s White Paper proposals have been exhaustively discussed on previous occasions. Reminiscent of a sheriff in the Wild West, to which the internet is so often likened, Ofcom would enlist deputies - social media platforms and other intermediaries acting under a legal duty of care - to police the unruly online population. Unlike its Wild West equivalent, however, Ofcom would get to define its territory and write the rules, as well as enforce them.

The introduction of a general definition of harm would tie Ofcom’s hands to some degree in deciding what does and does not constitute harmful speech. Limiting the scope of ‘harm’ to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals goes some way to align the proposed duty of care more closely with analogous offline duties of care, which are specifically safety-related.

Nevertheless, when applied in the context of speech there remain significant problems.

1. What is an adverse psychological impact? Does it have to be a medically recognised condition? If not, how wide is it meant to be? Is distress sufficient? The broader the meaning, the closer we come to a limitation that could mean little or nothing more than being upset or unhappy. The less clear the meaning, the more discretion would be vested in Ofcom to decide what counts as harm, and the more likely that providers would err on the side of caution in determining what kinds of content or activity are in scope of their duty of care.

2. The difficulty, not to say virtual impossibility, of the task faced by the regulator and providers should not be underestimated. Thus, for the lawful but harmful category, the government has said that it will include online abuse as a priority category in secondary legislation. However, on the basis of these proposals that must be limited to abuse that falls within the general definition of harm – i.e. abuse that presents a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. The provider’s actions under the duty of care should relate only to such harmful abuse. Where, concretely, is the dividing line between abuse that does and does not carry a foreseeable risk of adverse psychological impact? What content falls on either side of the line?

The provider would also have to take into account the proposed obligation not to remove controversial viewpoints and the possibility of user redress for unduly restricting their freedom of expression. Coincidentally, the Divisional Court in Scottow v CPS has in the last few days issued a judgment in which it referred to “the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another”.

These issues illustrate the care that has to be taken with using terms such as ‘online abuse’ to cover everything from strong language, through insults, to criminal threats of violence.

3. What is the threshold to trigger the duty of care? Is it the risk that someone, somewhere, might read something and claim to suffer an adverse psychological impact as a result? Is it a risk gauged according to the notional attributes of a reasonably tolerant hypothetical user, or does the standard of the most easily upset apply? How likely does it have to be that someone might suffer an adverse psychological impact if they read it? Is a reasonably foreseeable, but low, possibility sufficient? 

The Media Minister John Whittingdale, writing in the Daily Mail on the morning of the publication of the Final Response, said:

“This is not about an Orwellian state removal of content or building a ‘woke-net’ where causing offence leads to instant punishment.  Free speech includes the right to offend, and adults will still be free to access content that others may disapprove of.”

If risk and harm thresholds are sufficiently low and subjective, that is what would result.

4. Whatever the risk threshold might be, would it be set out in tightly drawn legislation or left to the discretion of Ofcom? It will not be forgotten that Ofcom, in a 2018 survey, suggested to respondents that ‘bad language’ is a harmful thing. A year later it described “offensive language” as a “potential harm”.

5. Lastly, in the absence of deliberate intent an author owes no duty to avoid causing harm to a reader of their work, even though psychological injury may result from reading it. That was confirmed by the Supreme Court in Rhodes. The government’s proposals would therefore mean that an intermediary would have a duty to consider taking steps in relation to material for which the author itself has no duty of care.

These are difficult issues that go to the heart of any proposal to impose a duty of care. They ought to have been the subject of debate over the last couple of years. Unfortunately they have been buried in the rush to include every conceivable kind of harm - however unsuited it might be to the legal instrument of a duty of care - and in discussions of ‘systemic’ duties of care abstracted from consideration of what should and should not amount to harm.

It should be no surprise if the government’s proposals became bogged down in a quagmire resulting from the attempt to institute a universal law of everything, amounting to little more than a vague precept not to behave badly online. The White Paper proposals were a castle built on quicksand, if not thin air.

The proposed general definition of harm, while not perfect, gives some shape to the edifice. It at least sets the stage for a proper debate on the limits of a duty of care, the legally protectable nature of personal safety online, and its relationship to freedom of speech – even if that should have taken place two years ago. Whether regulation by regulator is the appropriate way to supervise and police an appropriately drawn duty of care in relation to individual speech is another matter.