Two years ago Carnegie UK proposed that an Online Harms Reduction Bill could be legislated in twenty clauses. Faced with the 194 clauses and 14 Schedules of the Online Safety Bill, one is half-tempted to hanker after something equally minimal.
But only half-tempted. One reason for the Bill’s complexity is the rule of law requirement that the scope and content of a harm-based duty of care be articulated with reasonable certainty. That requires, among other things, deciding and clearly defining what does and does not count as harm. If no limits are set, or if harm is defined in vague terms, the duty will inevitably be arbitrary. If harm is to be gauged according to the subjective perception of someone who encounters a post or a tweet, that additionally raises the prospect of a veto for the most readily offended.
These kinds of issue arise for a duty of care as soon as it is extended beyond risk of objectively ascertainable physical injury: the kind of harm for which safety-related duties of care were designed. In short, speech is not a tripping hazard.
Another source of complexity is that the contemplated duties of care go beyond the ordinary scope of a duty of care – a duty to avoid causing injury to someone – into the exceptional territory of a duty to prevent other people from injuring each other. The greater the extent of a duty of care, the greater the need to articulate its content with reasonable precision.
There is no escape from grappling with these problems once we start down the road of trying to apply a duty of care model to online speech. The issues with an amorphous duty of care, and its concomitant impact on internet users’ freedom of expression, inevitably bubbled to the surface as the White Paper consultation proceeded.
The government’s response has been twofold: to confine harm to ‘physical or psychological harm’; and to spell out in detail a series of discrete duties with varying content, some based on harm and some on illegality of various kinds. The result, inevitably, is length and complexity.
The attempt to fit individual online speech into a legal structure designed for tripping hazards is not, however, the only reason why the Bill is 197 pages longer than Carnegie UK's proposal.
Others include:
- Inclusion of search engines as well as public user-to-user (U2U) platforms and private messaging services.
- Exclusion of various kinds of low-risk service.
- The last-minute inclusion of non-U2U pornography sites, effectively reviving the stalled and unimplemented Digital Economy Act 2017.
- Inclusion of duties requiring platforms to judge and police certain kinds of illegal user content.
- Implementing a policy agenda to require platforms to act proactively in detecting and policing user content.
- Setting out what kinds of illegal content are in and out of scope.
- Specific Ofcom powers to mandate the use of technology for detecting and removing terrorism and CSEA content. The CSEA powers could affect the ability to use end-to-end encryption on private messaging platforms.
- Children-specific duties, including duties around age verification and age assurance.
- Provisions around fraudulent online advertising, included at the last minute.
- The commitment made in April 2019 by then Culture Secretary Jeremy Wright QC that “where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.” The echoes of this promise continue to reverberate, with the government likely to put forward amendments to the Bill during its passage through Parliament.
- Restricting the freedom of large platforms to remove some kinds of ‘high value’ user content.
- Setting out when Ofcom can and cannot require providers to use approved proactive monitoring technology.
- Enactment of several new communications criminal offences.
At the heart of the Bill are the safety duties:
- The illegality safety duty for U2U services
- The illegality safety duty for search engines
- The “content harmful to adults” safety duty for Category 1 (large high-risk) U2U services
- The “content harmful to children” safety duty for U2U services likely to be accessed by children (meaning under-18s)
- The “content harmful to children” safety duty for search services likely to be accessed by children
The U2U illegality safety duty is imposed on all in-scope user-to-user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium-sized businesses and 700 large businesses, including some 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low-tech discussion forums and many others.
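By way of illustration only, the way the five safety duties attach to different kinds of service can be sketched as a simple decision function. The flag names and strings below are my own shorthand, not the Bill's terminology; whether a service is "likely to be accessed by children" is determined by the children's access assessment discussed later.

```python
def applicable_safety_duties(service_type, is_category_1=False,
                             likely_accessed_by_children=False):
    """Illustrative sketch (my own shorthand) of which safety duties attach to a service.

    service_type: "U2U" (user-to-user) or "search".
    is_category_1: large, high-risk U2U services designated as Category 1.
    likely_accessed_by_children: outcome of the children's access assessment.
    """
    duties = []
    if service_type == "U2U":
        duties.append("illegality safety duty (U2U)")              # all in-scope U2U services
        if is_category_1:
            duties.append("content harmful to adults duty")        # Category 1 U2U services only
        if likely_accessed_by_children:
            duties.append("content harmful to children duty (U2U)")
    elif service_type == "search":
        duties.append("illegality safety duty (search)")           # all in-scope search services
        if likely_accessed_by_children:
            duties.append("content harmful to children duty (search)")
    return duties

# e.g. a small discussion forum likely to be accessed by children:
print(applicable_safety_duties("U2U", likely_accessed_by_children=True))
```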
The structure shown in the diagram below follows a pattern common to all five duties: a preliminary risk assessment duty which underpins and, to an extent, feeds into the substantive safety duty.
The role played by “Priority Illegal Content” (the red boxes) is key to the safety duty. This kind of content triggers the proactive monitoring obligations in S.9(3)(a) and (b): to prevent users from encountering such content and to minimise the length of time for which any such content is present. This has a “predictive policing” element, since illegal content includes content that would be illegal if it were, hypothetically, present on the service.
The countervailing S.19 duties are off-diagram: duties to have regard to the importance of protecting users’ freedom of expression within the law and to users’ privacy rights (the latter now also mentioning data protection).
For the U2U illegality safety duty the Bill has made several significant changes compared with the draft Bill:
- An initial list of Priority criminal offences is included in Schedule 7 of the Bill. Previously any offences beyond terrorism and CSEA were to be added by secondary legislation. Schedule 7 can be amended by secondary legislation.
- Greater emphasis on proactive monitoring and technology. The new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. Ofcom’s safety duty enforcement powers (which in the draft Bill did not permit Ofcom to require use of proactive technology) now allow it to do so in support of the S.9(2) and 9(3) duties, for publicly communicated illegal content.
- The draft Bill set the duty-triggering threshold as the service provider having “reasonable grounds to believe” that the content is illegal. That has now gone. [But a government amendment at Report Stage is introducing "reasonable grounds to infer".]
This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time. Apply the duty to subjectively perceived speech offences (such as the new harmful communications offence) and the task becomes impossible.
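To make the trade-off concrete, here is a deliberately artificial numerical sketch. The error rates, scores and thresholds are invented purely for illustration and have nothing to do with any real filtering system or with the Bill; the point is only that the lower the removal threshold, the more legal content is swept up, and the higher the threshold, the more illegal content slips through.

```python
import random

random.seed(0)

# Toy population of 100,000 posts: 5% illegal, 95% legal, each given a noisy
# "illegality score" by a hypothetical automated filter.
posts = [("illegal", random.gauss(0.7, 0.15)) if random.random() < 0.05
         else ("legal", random.gauss(0.3, 0.15))
         for _ in range(100_000)]

for threshold in (0.3, 0.5, 0.7, 0.9):
    removed = [(label, score) for label, score in posts if score >= threshold]
    over_removed = sum(1 for label, _ in removed if label == "legal")
    missed = sum(1 for label, score in posts
                 if label == "illegal" and score < threshold)
    print(f"threshold={threshold}: removed={len(removed):6d}, "
          f"legal posts removed={over_removed:6d}, illegal posts missed={missed:5d}")
```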
This kind of consideration is why Article 15 of the EU eCommerce Directive prohibits Member States from imposing general monitoring obligations on online intermediaries: not for the benefit of platforms, but for the protection of users.
Post-Brexit the UK is free to depart from Article 15 if it so wishes. In January 2021 the government expressly abandoned its previous policy of maintaining alignment with Article 15. The Bill includes a long list of ‘priority illegal content’ which U2U providers will be expected proactively to detect and remove, backed up by Ofcom’s new enforcement powers to require use of proactive technology.
Perhaps the most curious aspects of the U2U illegality risk assessment and safety duties are the yellow boxes. These are aspects of the duties that refer to “harm” (defined as physical or psychological harm). Although they sit within the two illegality duties, none of them expressly requires the harm to be caused by, arise from or be presented by illegality – only ‘identified in’ the most recent illegal content risk assessment.
It is hard to imagine that the government intends these to be standalone duties divorced from illegality, since that would amount to a substantive ‘legal but harmful’ duty, which the government has disclaimed any intention to introduce. Nevertheless, the presumably intended dependence of harm on illegality could be put beyond doubt.
For comparison, above are the corresponding illegality duties applicable to search services. They are based on the U2U illegality duties, adapted and modified for search. The same comment about facially self-standing “harm” duties can be made as for the U2U illegality duties.
Harmful content duties
The safety duty that has attracted most debate is the “content harmful to adults” duty, because it imposes duties in relation to legal content. It applies only to platforms designated as Category 1: those considered to be high risk by reason of their size and functionality.
Critics argue that the Bill should not trespass into areas of legal speech, particularly given the subjective terms in which “content harmful to adults” is couched. The government’s position has always been that the duty was no more than a transparency duty, under which platforms would be at liberty to permit content harmful to adults on their services so long as they are clear about that in their terms and conditions. The implication was that a platform was free to take no steps about such content, although whether the wording of the draft Bill achieved that was debatable.
The Bill makes some significant changes, which can be seen in the diagram.
- It scraps the previous definition of non-designated ‘content harmful to adults’, consigning the “Adult of Ordinary Sensibilities” and its progeny to the oblivion of pre-legislative history.
- In its place, non-designated content harmful to adults is now defined as “content of a kind which presents a material risk of significant harm to an appreciable number of adults in the UK”. Harm means physical or psychological harm, as elaborated in S.187.
- All risk assessment and safety duties now relate only to “priority content harmful to adults”, which will be designated in secondary legislation. The previous circularly-drafted regulation-making power has been tightened up.
- The only duty regarding non-designated content harmful to adults is to notify Ofcom if it turns up in the provider’s risk assessment.
- The draft Bill’s duty to state how content harmful to adults is ‘dealt with’ by the provider is replaced by a provision stipulating that if any priority harmful content is to be ‘treated’ in one of four specified ways, the T&Cs must state, for each kind of such content, which of those treatments is to be applied. (That, at least, appears to be what is intended.)
- As with the other duties, the new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. However, unlike the other duties, Ofcom’s newly extended safety duty enforcement powers do not permit it to require use of proactive technology in support of the content harmful to adults duties.
- The Bill introduces a new ‘User Empowerment Duty’. This would require Category 1 providers to provide users with tools enabling them (if they so wish) to be alerted to, filter and block priority content harmful to adults.
The two remaining safety duties are the ‘content harmful to children’ duties. Those apply respectively to user-to-user services and search services likely to be accessed by children. Such likelihood is determined by the outcome of a ‘children’s access assessment’ that an in-scope provider must carry out.
For U2U services, the duties are:
These duties are conceptually akin to the ‘content harmful to adults’ duty, except that instead of focusing on transparency they impose substantive preventive and protective obligations. Unlike the previously considered duties, these have three, rather than two, levels of harmful content: Primary Priority Content, Priority Content and Non-Designated Content. The first two will be designated by the Secretary of State in secondary legislation. The definition of harm is the same as for the other duties.
The corresponding search service duty is:
Fraudulent advertising
The draft Bill’s exclusion of paid-for advertising is replaced by specific duties on Category 1 U2U services and Category 2A search services in relation to fraudulent advertisements. The main duties are equivalent to the S.9(3)(a) to (c) and S.24(3) safety duties applicable to priority illegal content.
Pornography sites
The Bill introduces a new category of ‘own content’ pornography services, which will be subject to a regime of their own, separate from the user-generated content regime. On-demand programme services already regulated under the Communications Act 2003 are excluded.
Excluded news publisher content
The Bill, like the draft Bill before it, excludes ‘news publisher content’ from the scope of various provider duties. This means that a provider’s duties do not apply to such content. That does not prevent a provider’s actions taken in pursuance of fulfilling its duties from affecting news publisher content. News media organisations have been pressing for more protection in that respect. It seems likely that the government will bring forward an amendment during the passage of the Bill. According to one report that may require platforms to notify the news publisher before taking action.
The scheme for excluding news publisher content, together with the express provider duties in respect of freedom of expression, journalistic content and content of democratic importance (CDI), is shown in the diagram below.
The most significant changes over the draft Bill are:
- The CDI duty is now to ensure that systems and processes apply to a “wide” diversity of political opinions in the same way.
- The addition of a requirement on all U2U providers to inform users in terms of service about their right to bring a claim for breach of contract if their content is taken down or restricted in breach of those terms.
New criminal offences
Very early on, in 2018, the government asked the Law Commission to review the communications offences – chiefly the notorious S.127 of the Communications Act 2003 and the Malicious Communications Act 1988.
It is open to question whether the government quite understood at that time that S.127 was more restrictive than any offline speech law. Nevertheless, there was certainly a case for reviewing the criminal law to see whether the online environment merited any new offences, and to revise the existing communications offences. The Law Commission also conducted a review of hate crime legislation, which the government is considering.
The Bill includes four new criminal offences - harmful communications, false communications, threatening communications, and a ‘cyberflashing’ offence. Concomitantly, it would repeal S.127 and the 1988 Act.
Probably the least (if at all) controversial is the cyberflashing offence (albeit some will say that the requirements to prove intent or the purpose for which the image is sent set too high a bar).
The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them).
It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.
The harmful communications offence suffers from problems which the Law Commission itself did not fully address. It is the Law Commission’s proposed replacement for S.127(1) of the Communications Act 2003.
When discussing the effect of the ‘legal but harmful’ provisions of the Bill the Secretary of State said: “This reduces the risk that platforms are incentivised to over-remove legal material ... because they are put under pressure to do so by campaign groups or individuals who claim that controversial content causes them psychological harm.”
However, the harmful communications offence is cast in terms that create just that risk under the illegality duty, via someone inserting themselves into the ‘likely audience’ and alerting the platform (explained in this blogpost and Twitter thread). The false communications offence also makes use of ‘likely audience’, albeit not as extensively as the harmful communications offence.
Secretary of State powers
The draft Bill empowered the Secretary of State to send back a draft Code of Practice to Ofcom for modification to reflect government policy. This extraordinary provision attracted universal criticism. It has now been replaced by a power to direct modification “for reasons of public policy”. This is unlikely to satisfy critics anxious to preserve Ofcom's independence.
Extraterritoriality
The Bill maintains the draft Bill’s enthusiasm for legislating for the whole world.
The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service.
Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated. For instance, is it an absolute measure or to be gauged relative to the size of the service? Does it mean ‘more than insignificant’, or does it connote something more?
The new regime for own-content pornography sites adopts limbs (a) and (b), but omits (c).
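Purely as an illustrative sketch of how the limbs combine (the flag names are my own shorthand, not the Bill's drafting), the ‘UK-linked’ test can be expressed as:

```python
def uk_linked(significant_uk_users: bool,
              uk_is_target_market: bool,
              material_risk_of_harm_in_uk: bool,
              own_content_porn_regime: bool = False) -> bool:
    """Illustrative sketch of the 'UK-linked' test; flag names are my own shorthand."""
    if significant_uk_users:        # limb (a): significant number of UK users
        return True
    if uk_is_target_market:         # limb (b): UK is one of (or the only) target market(s)
        return True
    # limb (c): reasonable grounds to believe a material risk of significant harm
    # to individuals in the UK -- omitted for the own-content pornography regime
    if material_risk_of_harm_in_uk and not own_content_porn_regime:
        return True
    return False
```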
The Bill goes on to provide that the duties imposed on user-to-user and search services extend only to (a) the design, operation and use of the service in the United Kingdom, and (b) in the case of a duty expressed to apply in relation to users of a service, its design, operation and use as it affects UK users. The own-content pornography regime adopts limb (a), but omits (b).
The new communications offences apply to an act done outside the UK, but only if the act is done by an individual habitually resident in England and Wales or a body incorporated or constituted under the law of England and Wales. It is notable that under the illegality safety duty: “for the purposes of determining whether content amounts to an offence, no account is to be taken of whether or not anything done in relation to the content takes place in any part of the United Kingdom.” The effect appears to be to deem user content to be illegal for the purposes of the illegality safety duty, regardless of whether the territoriality requirements of the substantive offence are satisfied.
Postscript
Some time ago I ventured that if the road to hell was paved with good intentions, this was a motorway. The government continues to speed along the duty of care highway.
It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.
It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content.