Thursday, 18 August 2022

Reimagining the Online Safety Bill

“The brutal truth is that nothing is likely to trip up the Online Safety Bill.” So began a blogpost on which I was working just over a month ago. Fortunately, it was still unfinished when Boris Johnson imploded for the final time, the Conservative leadership election was triggered, and candidates – led by Kemi Badenoch – started to voice doubts about the freedom of speech implications of the Bill. Then the Bill’s Commons Report stage was put on hold until the autumn, to allow the new Prime Minister to consider how to proceed.

The resulting temporary vacuum has sucked in commentary from all sides, whether redoubled criticisms of the Bill, renewed pursuit of existing agendas, or alarm at the prospect of further delays to the legislation.

Delay, it should be acknowledged, was always hardwired into the Bill. The Bill’s regulatory regime, even at a weighty 218 pages, is a bare skeleton. It will have to be fleshed out by a sequence of secondary legislation, Ofcom codes of practice, Ofcom guidance, and designated categories of service providers – each with its own step-by-step procedures. That kind of long drawn-out process was inevitable once the decision was taken to set up a broadcast-style regime under the auspices of a discretionary regulator such as Ofcom.

In July 2022 Ofcom published an implementation road-map that would result in the earliest aspect of the regulatory regime (illegality safety duties) going live in mid-2024. We have to wonder whether that would have proved to be optimistic even without the current leadership hiccup and – presumably - a period of reflection before the Bill can proceed further.

The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms. 

As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies.

If, under a new Prime Minister, the government were to reimagine the Online Safety Bill, how might they do it? Especially, how might they achieve a quick win: a regime that could be put into effect immediately, rather than the best part of two years later - if ever? 

The most vulnerable part of the Bill is probably the ‘legal but harmful to adults’ provisions. However, controversial as they undoubtedly are, those are far from the most problematic features of the Bill.

Here are some other aspects that might be under the spotlight.

The new communications offences

The least controversial part of the Bill ought to be the new Part 10 criminal offences. Those could, presumably, come into force shortly after Royal Assent. However, some of them badly need fixing.

The new communications offences have been designed to replace the Malicious Communications Act 1988 and the notorious S.127 Communications Act 2003. They have the authority of the Law Commission behind them.

Unfortunately, the new offences are a mess. The harmful communications offence, in particular, will plausibly create a veto for those most readily distressed by encountering views that they regard as deeply repugnant, even if that reaction is unreasonable. That prospect, and the consequent risk of legal online speech being chilled or removed, is exacerbated when the offence is combined with the illegality duty that the Bill, in its present form, would impose on all U2U platforms and search engines.

Part 10 of the Bill also has the air of unfinished business, with calls for further new offences such as deliberately sending flashing images to epileptics.

Make it about safety?

The 2017 Green Paper that started all this was entitled Internet Safety Strategy. Come the April 2019 White Paper, that had metamorphosed into Online Harms. Some have criticised the Bill’s reversion to Online Safety, although in truth the change is more label than substance. It does, however, prompt the question whether a desire for some quick wins would be served by focusing the Bill, in substance as well as in name, on safety in its core sense.

That is where much of the original impetus for the Bill stemmed from. Suicide, grooming, child abuse, physically dangerous ‘challenges’, violence – these are the stuff of safety-related duties of care. It is well within existing duty of care parameters to consider whether a platform has done something that creates or exacerbates a risk of physical injury as between users; then, whether a duty of care should be imposed; and if so, a duty to take what kind of steps (preventative or reactive) and in what circumstances. Some kinds of preventative duty, however, involve the imposition of general monitoring obligations, which are controversial.  

A Bill focused on safety in its core sense – risk of physical injury - might usefully clarify and codify, in the body of the Bill, the contents of such a duty of care and the circumstances in which it would arise. A distinction might, for example, be drawn between positively promoting an item of user content, compared with simply providing a forum akin to a traditional message board or threading a conversation. 

Duties of care are feasible for risk of physical injury, because physical injury is objectively identifiable. Physical injuries may differ in degree, but a bruise and a broken wrist are the same kind of thing. We also have an understanding of what gives rise to risk of physical injury, be it an unguarded lathe or a loose floorboard. 

The same is not true for amorphous conceptions of harm that depend on the subjective perception of the person who encounters the speech in question. Speech is not a tripping hazard.  Broader harm-based duties of care do not work in the same way, if at all, for controversial opinions, hate, blasphemy, bad language, insults, and all the myriad kinds of speech that to a greater or lesser extent excite condemnation, inflame emotions, or provoke anger, distress and assertions of risk of suffering psychological harm. 

A subjective harm-based duty of care requires the platform to assess and weigh those considerations against the freedom of speech not only of the poster, but of all other users who may react differently to the same speech, then decide which should prevail. That is a fundamentally different exercise from the assessment of risk of physical injury that underpins a safety-related duty of care. An approach that assumes that risk of subjectively perceived speech harms can be approached in the same way as risk of objectively identifiable physical injury will inevitably end up floundering in the kind of morass in which the Bill now finds itself.

The difference from risk of physical injury was, perhaps unwittingly, illustrated in the context of the illegality duty by the then Digital Minister Chris Philp in the Bill’s Commons Committee stage. He was discussing the task that platforms would perform in deciding whether user content was illegal under the new ‘harmful communications’ offence (above). The platform would, he said, perform a balancing exercise in assessing whether the content was a contribution to a matter of public interest. No balancing exercise is necessary to determine whether a broken wrist is or is not a physical injury.

Again within the illegality duty, the new foreign interference offence under Clause 13 of the National Security Bill would be designated as a priority offence under the Online Safety Bill. That would require platforms to adjudge, among other things, risk of “spiritual injury”. 

The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action. 

Rethink the illegality duty?

Requiring platforms to remove illegal user content sounds simple, but isn’t. During the now paused Commons Report Stage debate on the Bill, Sir Jeremy Wright QC (who, ironically, was the Secretary of State for Culture when the White Paper was launched in April 2019) observed:

“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. …

… criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case.”   

He might also have added that the relevant factual material for any given offence will often include information that is outside anything of which the platform can have knowledge, especially for real-time automated filtering systems.

In any event, it is pertinent to ask: for how many offences can illegality be determined with confidence simply by looking at the content itself and nothing else? Illegality often requires assessment of intention (sometimes, but not always, intention can be inferred from the content), of purpose, or of extrinsic factual information. The Bill now contains an illuminating, but ultimately unsatisfactory, attempt (New Clause 14) to address these issues.

The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.

Given the increasingly (if belatedly) apparent problems with the illegality duty, what options might a government coming to it with a fresh eye consider? The current solution, as with so many problematic aspects of the Bill, is to hand it off to Ofcom. New Clause 15 would require Ofcom to produce guidance on how platforms should go about adjudging illegality in accordance with NC 14.

Assuming that the illegality duty were not dropped altogether, other possibilities might include:

  • Restrict any duty to offences where the existence of an offence (including any potentially available defences) is realistically capable of being adjudged on the face of the content itself with no further information.
  • For in-scope offences, raise the illegality determination threshold from ‘reasonable grounds to infer’ to ‘manifest illegality’.

Steps of this kind might in any event be necessary to achieve ECHR compliance. They would also reflect broader traditions of protection of freedom of speech, such as the presumption against prior restraint.

Illegality across the UK

Another consequence of illegality being a legal construct is that criminal offences vary across the UK. The Bill requires a platform, when preventing or removing user content under its illegality duty, to treat a criminal offence in one part of the UK as if it applied to the whole of the UK. This has the bizarre consequence that platforms will, for instance, have to apply in parallel both the new harmful communications offence contained in the Bill and its repealed predecessor, S.127 of the Communications Act 2003. 

Why is that? Because S.127 will be repealed only for England and Wales and will remain in force in Scotland and Northern Ireland. Platforms would have to treat S.127 as if it still applied in England and Wales; and, conversely, the new England and Wales harmful communications offence as if it applied in Scotland and Northern Ireland. 

In Committee on 14 June 2022 the then Minister confirmed that: 

“…the effect of the clauses is a levelling up—if I may put it that way. Any of the offences listed effectively get applied to the UK internet, so if there is a stronger offence in any one part of the United Kingdom, that will become applicable more generally via the Bill.”

Of course the alternative of requiring platforms to undertake the (in reality impossible) task of deciding which UK law – England and Wales, Northern Ireland or Scotland - applied to which post or tweet, would hardly be less problematic.

At Report stage the government added an amendment to the Bill which would, for the future, mean that the non-priority illegality duty would apply only to offences enacted by the UK Parliament, or by devolved administrations with the consent of the Westminster government. If nothing else, that shines a brighter spotlight on the problem.   

The role of Ofcom

The most radical option, were the government looking for a legislative quick win that cuts out delay, would be to jettison Ofcom and its companion two-year procession of secondary legislation, guidance and codes of practice.

In truth, as a matter of principle it was always a bad idea to apply discretionary broadcast-style regulation to individual speech. The way to govern individual speech is with clear, certain laws of general application. A government so minded might consider aligning practicality with principle.

If Ofcom’s role were to be scrapped, what could replace it? One alternative might be to take existing common law duties of care relating to risk of physical injury as the starting point, clarify and codify them on the face of the Bill (not by secondary legislation), and provide for enforcement otherwise than through a regulator. 

That would require the scope and content of any duties under the Bill to be articulated, Goldilocks-style, to a reasonable level of clarity: not so abstract as to be vague, not so technology and business model-specific as to be unworkable, but just right. Granted, that is easier said than done; but still perhaps more achievable than attempting to launch the overladen supertanker that is now marooned in its passage through Parliament. 

This approach would, however, raise questions about how some kinds of duty could be made compatible with the hosting protections originating in the EU eCommerce Directive, to which the government remains committed (unlike the prohibition on general monitoring obligations, from which the government has distanced itself).

There would then be the question of Ofcom’s proposed powers to issue notices requiring providers to use approved scanning and filtering technology. Those powers are at best controversial, raising a plethora of issues of their own.  If such powers were to be continued in some form, they could be a candidate for separate legislation, so that issues such as impact on privacy and end-to-end encryption could be brought out into the open and given the full debate that they deserve. 

Other parts of the Bill

There is much more to the Bill than the parts discussed so far: fraudulent advertising and age-verification of pornographic websites have Parts of the Bill to themselves. There are numerous children-related provisions, which to an extent overlap with the ICO Code of Practice on age-appropriate design and the currently mothballed Digital Economy Act 2017. Other aspects of the Bill include press exemptions promised at the time of the White Paper (which always looked likely to be undeliverable and are still the subject of heavy debate); and the provisions constraining how Category 1 platforms can treat journalism and content of democratic importance. 

These are all crafted on the basis of a regulatory regime operated by Ofcom. It would not be a simple matter to disentangle them from Ofcom, should the government contemplate a non-Ofcom fast track.

Online ASBIs

A refocused Bill could face the objection that it does not address some of the most unpleasant, yet currently legal, user behaviour that can be found online. That does not rule out the possibility of legislation that on its face (not consigned to secondary legislation) draws clear lines that may differ from those that apply today.  But if Parliament is unwilling or unable to draw clear lines to govern behaviour regarded as beyond the pale, what other possibilities exist?

One answer, albeit itself somewhat controversial, is already sitting on the legislative shelf, but (as far as can be seen) appears in the online context to be gathering dust.

The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI, the successor to ASBOs) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes online disturbers of the peace, albeit in very broad terms. It maps readily on to the most egregious abuse, cyberbullying, harassment and the like.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017, August 2019 and June 2022) makes no mention of their use in relation to online behaviour. Yet nothing in the legislation restricts an ASBI to offline activities. Indeed, over 10 years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17 year old who had been posting material on the social media platform Bebo. The order banned him from publishing material that was threatening or abusive and promoted criminal activity.

ASBIs raise difficult questions of how they should be framed and of proportionality. Some may have concerns about the broad terms in which anti-social behaviour is defined in the legislation. Nevertheless, the courts to which applications are made should, at least in principle, have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

That said, the July 2020 Civil Justice Council Report “Anti-Social Behaviour and the Civil Courts” paints a somewhat dispiriting picture of the use of ASBIs offline. It highlights a practice of applying for orders ex parte – something that would be especially troubling for an ASBI that would affect the defendant’s freedom of expression. Concerns of that kind would have to be carefully addressed if online ASBIs were to be picked up and dusted off.

On the positive side, the usefulness of online ASBIs could be transformed if the government were to explore the possibility of extending beyond the official authorities the ability to apply to court for an online ASBI, for instance to selected voluntary organisations.

Finally, for a longer-term view of access to justice online, Section 9 of my submission to the Online Harms White Paper consultation has some blue-sky thoughts.

[Typo (2009) corrected to 2019, 19 Aug 2022.]



Saturday, 30 July 2022

Platforms adjudging illegality – the Online Safety Bill’s inference engine

The Online Safety Bill, before the pause button was pressed, enjoyed a single day’s Commons Report stage debate on 12 July 2022.  Several government amendments were passed and incorporated into the Bill.

One of the most interesting additions is New Clause 14 (NC14), which stipulates how user-to-user providers and search engines should decide whether user content constitutes a criminal offence. This was previously an under-addressed but nevertheless deep-seated problem for the Bill’s illegality duty. 

One underlying issue is that (especially for real-time proactive filtering) providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making.

Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff.

NC 14 looks like a response to concerns raised in April 2022 by the Independent Reviewer of Terrorism Legislation over how a service provider’s illegality duty would apply to terrorism offences, for which (typically) the scope of acts constituting an offence is extremely broad. The most significant limits on the offences are set by intention and available defences – neither of which may be apparent to the service provider. As the Independent Reviewer put it:

“Intention, and the absence of any defence, lie at the heart of terrorism offending”.

He gave five examples of unexceptional online behaviour, ranging from uploading a photo of Buckingham Palace to soliciting funds on the internet, which, if intention or lack of a defence were simply assumed, would be caught by the illegality duty. He noted:

“It cannot be the case that where content is published etc. which might result in a terrorist offence being committed, it should be assumed that the mental element is present, and that no defence is available. Otherwise, much lawful content would “amount to” a terrorist offence.”

If, he suggested, the intention of the Bill was that inferences about mental element and lack of defence should be drawn, then the Bill ought to identify a threshold. But if the intention was to set the bar at ‘realistic to infer’, that:

“does not allow sufficiently for freedom of speech. It may be “realistic” but wholly inaccurate to infer terrorist intent in the following words: “I encourage my people to shoot the invaders””

Issues of this kind are not confined to terrorism offences. There will be other offences for which context is significant, or where a significant component of the task of keeping the offence within proper bounds is performed by intention and defences.

Somewhat paraphrased, the answers provided to service providers by NC 14 are:

  • Base your illegality judgement on whatever relevant information you or your automated system have reasonably available.
  • If you have reasonable grounds to infer that all elements of the offence (including intention) are present, that is sufficient unless you have reasonable grounds to infer that a defence may be successful.
  • If an item of content surmounts the reasonable grounds threshold, and you do not have reasonable grounds to infer a defence, then you must treat it as illegal.

Factors relevant to the reasonable availability of information include the size and capacity of the service provider, and whether a judgement is made by human moderators, automated systems or processes, or a combination of both. (The illegality duty applies not just to large social media operators but to all 25,000 providers within the scope of the Bill.)
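
For those who find it easier to follow a decision procedure as code, here is a minimal sketch of the NC14 rule as paraphrased above. It is purely illustrative: the predicates (grounds_to_infer_all_elements, grounds_to_infer_defence) are hypothetical stand-ins for judgements that the Bill leaves to the provider or its automated systems, not anything drawn from the Bill’s drafting.

```python
# Illustrative sketch only: a paraphrase of the NC14 decision rule, not a
# definitive reading of the Bill. The predicate functions are hypothetical.
from dataclasses import dataclass

@dataclass
class IllegalityJudgement:
    treat_as_illegal: bool
    reason: str

def judge_illegality(available_info,
                     grounds_to_infer_all_elements,
                     grounds_to_infer_defence) -> IllegalityJudgement:
    """Judge only on the information reasonably available to the provider,
    which for automated systems may be little more than the post itself."""
    # Step 1: reasonable grounds to infer that every element of the offence
    # (including any mental element, such as intention) is present?
    if not grounds_to_infer_all_elements(available_info):
        return IllegalityJudgement(False, "elements of the offence not inferred")

    # Step 2: the possibility of a defence is ignored unless the available
    # information gives reasonable grounds to infer that it may succeed.
    if grounds_to_infer_defence(available_info):
        return IllegalityJudgement(False, "grounds to infer a defence may succeed")

    # Otherwise the provider must treat the content as illegal.
    return IllegalityJudgement(True, "offence inferred and no defence inferred")
```

What the sketch makes visible is that anything absent from available_info – extrinsic context, the poster’s actual intention, an unstated excuse – simply never enters the calculation.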

Ofcom will be required to provide guidance to service providers about making illegality judgements.

What does this mean for users? It is users, let us not forget, whose freedom of expression rights are at risk of being interfered with as a result of the illegality removal duty imposed on service providers. The duty can be characterised as a form of prior restraint.

The first significant point concerns unavailability to the provider of otherwise potentially relevant contextual information. If, from information reasonably available to the provider (which at least for automated systems may be only the content of the posts themselves and, perhaps, related posts), it appears that there are reasonable grounds to infer that an offence has been committed, that is enough. At least for automated real-time systems, the possibility that extrinsic information might put the post in a different light appears to be excluded from consideration, unless its existence and content can be inferred from the posts themselves.

Alternatively, for offences that are highly dependent on context (including extrinsic context), would there be a point at which a provider could (or should) conclude that there is too little information available to support a determination of reasonable grounds to infer?

Second, the difference between elements of the offence itself and an available defence may be significant. The possibility of a defence is to be ignored unless the provider has some basis in the information reasonably available to it on which to infer that a defence may be successful.

Take ‘reasonable excuse’. For the new harmful communications offence that the Bill would enact, lack of reasonable excuse is an element of the offence, not a defence. A provider could not conclude that the user’s post was illegal unless it had reasonable grounds to infer (on the basis of the information reasonably available to it) that there was no reasonable excuse.

By contrast, two offences under the Terrorism Act 2000, S.58 (collection of information likely to be of use to a terrorist) and S.58A (publishing information about members of the armed forces etc. likely to be of use to a terrorist), provide for a reasonable excuse defence.

The possibility of such a defence is to be ignored unless the provider has reasonable grounds (on the basis of the information reasonably available to it) to infer that a defence may be successful.

The difference is potentially significant when we consider that (for instance) journalism or academic research constitutes a defence of reasonable excuse under S.58. Unless the material reasonably available to the provider (or its automated system) provides a basis on which to infer that journalism or academic research is the purpose of the act, the possibility of a journalism or academic research defence is to be ignored. (If, hypothetically, the offence had been drafted similarly to the harmful communications offence, so that lack of reasonable excuse was an element of the offence, then in order to adjudge the post as illegal the provider would have had to have reasonable grounds to infer that the purpose was not (for instance) journalism or academic research.)
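
Expressed in the same illustrative terms (and with the same caveat that the function names and predicates are hypothetical, not taken from the Bill or the statutes), the element-versus-defence asymmetry might be sketched as follows.

```python
# Illustrative sketch of the element-versus-defence asymmetry discussed above.
# The predicates are hypothetical stand-ins for inferences drawn from the
# information reasonably available to the provider.

def harmful_communications_illegal(info, infer_other_elements, infer_no_reasonable_excuse):
    # Lack of reasonable excuse is an ELEMENT of the offence: the provider
    # cannot treat the post as illegal unless it has reasonable grounds to
    # infer that there was no reasonable excuse.
    return infer_other_elements(info) and infer_no_reasonable_excuse(info)

def s58_terrorism_illegal(info, infer_elements, infer_reasonable_excuse_defence):
    # Reasonable excuse (e.g. journalism or academic research) is a DEFENCE:
    # its possibility is ignored unless the available information gives
    # reasonable grounds to infer that it may succeed.
    if not infer_elements(info):
        return False
    if infer_reasonable_excuse_defence(info):
        return False  # defence inferred from the available information
    return True       # nothing visible to suggest a defence, so it is ignored
```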

NC14 was debated at Report Stage. The Opposition spokesperson, Alex Davies-Jones, said:

“The new clause is deeply problematic, and is likely to reduce significantly the amount of illegal content and fraudulent advertising that is correctly identified and acted on.”

“First, companies will be expected to determine whether content is illegal or fraudulently based on information that is “reasonably available to a provider”, with reasonableness determined in part by the size and capacity of the provider. That entrenches the problems I have outlined with smaller, high-risk companies being subject to fewer duties despite the acute risks they pose. Having less onerous applications of the illegal safety duties will encourage malign actors to migrate illegal activity on to smaller sites that have less pronounced regulatory expectations placed on them.”

If information ‘reasonably available to a provider” is insufficiently stringent, what information should a provider be required to base its decision upon? Should it guess at information that it does not have, or make assumptions (which would trespass into arbitrariness)?

In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned.

Alex Davies-Jones went on:

“That significantly raises the threshold at which companies are likely to determine that content is illegal.”

We might fairly ask: raises the threshold compared with what? The draft Bill defined illegal user content as “content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence.” That standard (which would inevitably have resulted in over-removal) was dropped from the Bill as introduced into Parliament, leaving it unclear what standard service providers were to apply.

The new Online Safety Minister (Damian Collins) said:

“The concern is that some social media companies, and some users of services, may have sought to interpret the criminal threshold as being based on whether a court of law has found that an offence has been committed, and only then might they act. Actually, we want them to pre-empt that, based on a clear understanding of where the legal threshold is. That is how the regulatory codes work. So it is an attempt, not to weaken the provision but to bring clarity to the companies and the regulator over the application.”

In any event, if the Opposition were under the impression that prior to NC14 the threshold in the Bill was lower than ‘reasonable grounds to infer’, what might that standard be? If service providers were obliged to remove user content on (say) a mere suspicion of possible illegality, does that sufficiently protect legal online speech? Would a standard set that low comply with the UK’s ECHR obligations, to which – whatever this government’s view of the ECHR may be - the Opposition is committed? Indeed it is sometimes said that the standard set by the ECHR is manifest illegality.

It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.

It is ironic that in the 12 July Commons debate the most perceptive comments about how service providers are meant to comply with the illegality duty were made by Sir Jeremy Wright QC, the former Culture Secretary who launched the White Paper in April 2019. He said:

“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. But, as new clause 14 makes clear, what the platform has to do is not just identify content but have reasonable grounds to infer that all elements of an offence, including the mental elements, are present or satisfied, and, indeed, that the platform does not have reasonable grounds to infer that the defence to the offence may be successfully relied upon.

That is right, of course, because criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case. This is helpful guidance, but the Government will have to think carefully about what further guidance they will need to give—or Ofcom will need to give—in order to help a platform to make those very difficult judgments.”

Why did the government not address this fundamental issue at the start, when a full and proper debate about it could have been had?

This is not the only aspect of the Online Safety Bill that could and should have been fully considered and discussed at the outset. If the Bill ends up being significantly delayed, or even taken back to the drawing board, the government has only itself to blame.



Sunday, 27 March 2022

Mapping the Online Safety Bill

On 17 March 2022 the UK’s Online Safety Bill, no longer a draft, was introduced into Parliament and had its formal First Reading.

Two years ago Carnegie UK proposed that an Online Harms Reduction Bill could be legislated in twenty clauses. Faced with the 194 clauses and 14 Schedules of the Online Safety Bill, one is half-tempted to hanker after something equally minimal.

But only half-tempted. One reason for the Bill’s complexity is the rule of law requirement that the scope and content of a harm-based duty of care be articulated with reasonable certainty. That requires, among other things, deciding and clearly defining what does and does not count as harm. If no limits are set, or if harm is defined in vague terms, the duty will inevitably be arbitrary. If harm is to be gauged according to the subjective perception of someone who encounters a post or a tweet, that additionally raises the prospect of a veto for the most readily offended.

These kinds of issue arise for a duty of care as soon as it is extended beyond risk of objectively ascertainable physical injury: the kind of harm for which safety-related duties of care were designed. In short, speech is not a tripping hazard.

Another source of complexity is that the contemplated duties of care go beyond the ordinary scope of a duty of care – a duty to avoid causing injury to someone – into the exceptional territory of a duty to prevent other people from injuring each other. The greater the extent of a duty of care, the greater the need to articulate its content with reasonable precision.

There is no escape from grappling with these problems once we start down the road of trying to apply a duty of care model to online speech. The issues with an amorphous duty of care, and its concomitant impact on internet users’ freedom of expression, inevitably bubbled to the surface as the White Paper consultation proceeded.

The government’s response has been twofold: to confine harm to ‘physical or psychological harm’; and to spell out in detail a series of discrete duties with varying content, some based on harm and some on illegality of various kinds. The result, inevitably, is length and complexity.

The attempt to fit individual online speech into a legal structure designed for tripping hazards is not, however, the only reason why the Bill is 197 pages longer than Carnegie UK's proposal.

Others include:
  • Inclusion of search engines as well as public user-to-user (U2U) platforms and private messaging services.
  • Exclusion of various kinds of low-risk service.
  • The last minute inclusion of non-U2U pornography sites, effectively reviving the stalled and un-implemented Digital Economy Act 2017.
  • Inclusion of duties requiring platforms to judge and police certain kinds of illegal user content.
  • Implementing a policy agenda to require platforms to act proactively in detecting and policing user content.
  • Setting out what kinds of illegal content are in and out of scope.
  • Specific Ofcom powers to mandate the use of technology for detecting and removing terrorism and CSEA content. The CSEA powers could affect the ability to use end-to-end encryption on private messaging platforms.
  • Children-specific duties, including around age verification and assurance.
  • Provisions around fraudulent online advertising, included at the last minute.
  • The commitment made in April 2019 by then Culture Secretary Jeremy Wright QC that “where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.” The echoes of this promise continue to reverberate, with the government likely to put forward amendments to the Bill during its passage through Parliament.
  • Restricting the freedom of large platforms to remove some kinds of ‘high value’ user content.
  • Setting out when Ofcom can and cannot require providers to use approved proactive monitoring technology.
  • Enactment of several new communications criminal offences.
What does the published Bill now do and how has it changed from the draft Bill?

At the heart of the Bill are the safety duties:
  • The illegality safety duty for U2U services
  • The illegality safety duty for search engines
  • The “content harmful to adults” safety duty for Category 1 (large high-risk) U2U services
  • The “content harmful to children” safety duty for U2U services likely to be accessed by children (meaning under-18s)
  • The “content harmful to children” safety duty for search services likely to be accessed by children
The illegality safety duties

The U2U illegality safety duty is imposed on all in-scope user-to-user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium businesses and 700 large businesses, including some 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low-tech discussion forums and many others.

The structure shown in the diagram below follows a pattern common to all five duties: a preliminary risk assessment duty which underpins and, to an extent, feeds into the substantive safety duty.

The role played by “Priority Illegal Content” (the red boxes) is key to the safety duty. This kind of content triggers the proactive monitoring obligations in S.9(3)(a) and (b): to prevent users encountering such content and to minimise the length of time for which any such content is present. This has a “predictive policing” element, since illegal content includes content that would be illegal if it were, hypothetically, present on the service.

The countervailing S.19 duties to have regard to the importance of protecting users’ freedom of expression within the law and in relation to users’ privacy rights (which now also mentions data protection) are off-diagram.

For the U2U illegality safety duty the Bill has made several significant changes compared with the draft Bill:
  • An initial list of Priority criminal offences is included in Schedule 7 of the Bill. Previously any offences beyond terrorism and CSEA were to be added by secondary legislation. Schedule 7 can be amended by secondary legislation.
  • Greater emphasis on proactive monitoring and technology. The new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. Ofcom’s safety duty enforcement powers (which in the draft Bill did not permit Ofcom to require use of proactive technology) now allow it to do so in support of the S.9(2) and 9(3) duties, for publicly communicated illegal content.
  • The draft Bill set the duty-triggering threshold as the service provider having “reasonable grounds to believe” that the content is illegal. That has now gone. [But a government amendment at Report Stage is introducing "reasonable grounds to infer".]
The problem with the “reasonable grounds to believe” or similar threshold was that it expressly baked in over-removal of lawful content. Although that has now been dropped, the Bill offers no replacement. It is silent on how clear it must be that the content is illegal in order to trigger the illegality duty.

This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time. Apply the duty to subjectively perceived speech offences (such as the new harmful communications offence) and the task becomes impossible.

This kind of consideration is why Article 15 of the EU eCommerce Directive prohibits Member States from imposing general monitoring obligations on online intermediaries: not for the benefit of platforms, but for the protection of users. 

Post-Brexit the UK is free to depart from Article 15 if it so wishes. In January 2021 the government expressly abandoned its previous policy of maintaining alignment with Article 15. The Bill includes a long list of ‘priority illegal content’ which U2U providers will be expected proactively to detect and remove, backed up with Ofcom’s new enforcement powers to require use of proactive technology.

Perhaps the most curious aspects of the U2U illegality risk assessment and safety duties are the yellow boxes. These are aspects of the duties that refer to “harm” (defined as physical or psychological harm). Although they sit within the two illegality duties, none of them expressly requires the harm to be caused by, arise from or be presented by illegality – only ‘identified in’ the most recent illegal content risk assessment.

It is hard to imagine that the government intends these to be standalone duties divorced from illegality, since that would amount to a substantive ‘legal but harmful’ duty, which the government has disclaimed any intention to introduce. Nevertheless, the presumably intended dependence of harm on illegality could be put beyond doubt.

For comparison, above are the corresponding illegality duties applicable to search services. They are based on the U2U illegality duties, adapted and modified for search. The same comment about facially self-standing “harm” duties can be made as for the U2U illegality duties.

Harmful content duties

The safety duty that has attracted most debate is the “content harmful to adults” duty, for the reason that it imposes duties in relation to legal content. It applies only to platforms designated as Category 1: those considered to be high risk by reason of size and functionality.

Critics argue that the Bill should not trespass into areas of legal speech, particularly given the subjective terms in which “content harmful to adults” is couched. The government’s position has always been that the duty was no more than a transparency duty, under which platforms would be at liberty to permit content harmful to adults on their services so long as they are clear about that in their terms and conditions. The implication was that a platform was free to take no steps about such content, although whether the wording of the draft Bill achieved that was debatable.

The Bill makes some significant changes, which can be seen in the diagram.
  • It scraps the previous definition of non-designated ‘content harmful to adults’, consigning the “Adult of Ordinary Sensibilities” and its progeny to the oblivion of pre-legislative history.
  • In its place, non-designated content harmful to adults is now defined as “content of a kind which presents a material risk of significant harm to an appreciable number of adults in the UK”. Harm means physical or psychological harm, as elaborated in S.187.
  • All risk assessment and safety duties now relate only to “priority content harmful to adults”, which will be designated in secondary legislation. The previous circularly-drafted regulation-making power has been tightened up.
  • The only duty regarding non-designated content harmful to adults is to notify Ofcom if it turns up in the provider’s risk assessment.
  • The draft Bill’s duty to state how content harmful to adults is ‘dealt with’ by the provider is replaced by a provision stipulating that if any priority harmful content is to be ‘treated’ in one of four specified ways, the T&Cs must state for each kind of such content which of those is to be applied. (That, at least, appears to be what is intended.)
  • As with the other duties, the new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. However, unlike the other duties Ofcom’s newly extended safety duty enforcement powers do not permit Ofcom to require use of proactive technology in support of the content harmful to adults duties.
  • The Bill introduces a new ‘User Empowerment Duty’. This would require Category 1 providers to provide users with tools enabling them (if they so wish) to be alerted to, filter and block priority content harmful to adults.
The four kinds of treatment of priority content harmful to adults that can trigger a duty to specify which is to be applied are: taking down the content, restricting users’ access to the content, limiting the recommendation or promotion of the content, or recommending or promoting the content. It can be seen that the first three are all restrictive measures, whereas the fourth is the opposite.

The two remaining safety duties are the ‘content harmful to children’ duties. Those apply respectively to user-to-user services and search services likely to be accessed by children. Such likelihood is determined by the outcome of a ‘children’s access assessment’ that an in-scope provider must carry out.

For U2U services, the duties are shown in the diagram below.

These duties are conceptually akin to the ‘content harmful to adults’ duty, except that instead of focusing on transparency they impose substantive preventive and protective obligations. Unlike the previously considered duties these have three, rather than two, levels of harmful content: Primary Priority Content, Priority Content and Non-Designated Content. The first two will be designated by the Secretary of State in secondary legislation. The definition of harm is the same as for the other duties.

The corresponding search service duty is shown in the diagram below.

Fraudulent advertising

The draft Bill’s exclusion of paid-for advertising is replaced by specific duties on Category 1 U2U services and Category 2A search services in relation to fraudulent advertisements. The main duties are equivalent to the S.9(3)(a) to (c) and S.24(3) safety duties applicable to priority illegal content.

Pornography sites 

The Bill introduces a new category of ‘own content’ pornography services, which will be subject to their own regime, separate from user-generated content. On-demand programme services already regulated under the Communications Act 2003 are excluded.

Excluded news publisher content

The Bill, like the draft Bill before it, excludes ‘news publisher content’ from the scope of various provider duties. This means that a provider’s duties do not apply to such content. That does not prevent a provider’s actions taken in pursuance of fulfilling its duties from affecting news publisher content. News media organisations have been pressing for more protection in that respect. It seems likely that the government will bring forward an amendment during the passage of the Bill. According to one report that may require platforms to notify the news publisher before taking action.

The scheme for excluding news publisher content, together with the express provider duties in respect of freedom of expression, journalistic content and content of democratic importance (CDI), is shown in the diagram below.

The most significant changes over the draft Bill are:
  • The CDI duty is now to ensure that systems and processes apply to a “wide” diversity of political opinions in the same way.
  • The addition of a requirement on all U2U providers to inform users in terms of service about their right to bring a claim for breach of contract if their content is taken down or restricted in breach of those terms.
Criminal offence reform

Very early on, in 2018, the government asked the Law Commission to review the communications offences – chiefly the notorious S.127 of the Communications Act 2003 and the Malicious Communications Act 1988.

It is open to question whether the government quite understood at that time that S.127 was more restrictive than any offline speech law. Nevertheless, there was certainly a case for reviewing the criminal law to see whether the online environment merited any new offences, and to revise the existing communications offences. The Law Commission also conducted a review of hate crime legislation, which the government is considering.

The Bill includes four new criminal offences - harmful communications, false communications, threatening communications, and a ‘cyberflashing’ offence. Concomitantly, it would repeal S.127 and the 1988 Act.

Probably the least (if at all) controversial is the cyberflashing offence (albeit some will say that the requirements to prove intent or the purpose for which the image is sent set too high a bar).

The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them).

It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.

The harmful communications offence suffers from problems which the Law Commission itself did not fully address. It is the Law Commission’s proposed replacement for S.127(1) of the Communications Act 2003.

When discussing the effect of the ‘legal but harmful’ provisions of the Bill the Secretary of State said: “This reduces the risk that platforms are incentivised to over-remove legal material ... because they are put under pressure to do so by campaign groups or individuals who claim that controversial content causes them psychological harm.”

However, the harmful communications offence is cast in terms that create just that risk under the illegality duty, via someone inserting themselves into the ‘likely audience’ and alerting the platform (explained in this blogpost and Twitter thread). The false communications offence also makes use of ‘likely audience’, albeit not as extensively as the harmful communications offence.

Secretary of State powers

The draft Bill empowered the Secretary of State to send back a draft Code of Practice to Ofcom for modification to reflect government policy. This extraordinary provision attracted universal criticism. It has now been replaced by a power to direct modification “for reasons of public policy”. This is unlikely to satisfy critics anxious to preserve Ofcom's independence.

Extraterritoriality

The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world.

The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service.

Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated. For instance, is it an absolute measure or to be gauged relative to the size of the service? Does it mean ‘more than insignificant’, or does it connote something more?
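
As a purely illustrative sketch (the boolean inputs are hypothetical stand-ins, and the ‘significant number’ question is deliberately left as an unexplained input, mirroring the Bill’s silence), the three-limb test reduces to a simple disjunction in which any one limb suffices:

```python
# Illustrative sketch of the 'UK-linked' test described above; not the Bill's wording.
def is_uk_linked(has_significant_number_of_uk_users: bool,      # limb (a): 'significant' undefined
                 uk_users_are_a_target_market: bool,             # limb (b)
                 material_risk_of_significant_harm_in_uk: bool   # limb (c): verges on mere accessibility
                 ) -> bool:
    return (has_significant_number_of_uk_users
            or uk_users_are_a_target_market
            or material_risk_of_significant_harm_in_uk)
```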

The new regime for own-content pornography sites adopts limbs (a) and (b), but omits (c).

The Bill goes on to provide that the duties imposed on user-to-user and search services extend only to (a) the design, operation and use of the service in the United Kingdom, and (b) in the case of a duty expressed to apply in relation to users of a service, its design, operation and use as it affects UK users. The own-content pornography regime adopts limb (a), but omits (b).

The new communications offences apply to an act done outside the UK, but only if the act is done by an individual habitually resident in England and Wales or a body incorporated or constituted under the law of England and Wales. It is notable that under the illegality safety duty: “for the purposes of determining whether content amounts to an offence, no account is to be taken of whether or not anything done in relation to the content takes place in any part of the United Kingdom.” The effect appears to be to deem user content to be illegal for the purposes of the illegality safety duty, regardless of whether the territoriality requirements of the substantive offence are satisfied.

Postscript

Some time ago I ventured that if the road to hell was paved with good intentions, this was a motorway. The government continues to speed along the duty of care highway.

It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.

It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content.  

[Amended 11 July 2022 to take account of government's proposed Report Stage amendment to illegality duties on "reasonable grounds to infer".]

Saturday, 19 February 2022

Harm Version 4.0 – the Online Safety Bill in metamorphosis

It is time – in fact it is overdue - to take stock of the increasingly imminent Online Safety Bill. The two months before and after Christmas saw a burst of activity: Reports from the Joint Parliamentary Committee scrutinising the draft Bill, from the Commons DCMS Committee on the ‘Legal but Harmful’ issue, and from the House of Commons Petitions Committee on Tackling Online Abuse.

Several Parliamentary debates took place, and recently the DCMS made two announcements: first, that an extended list of priority illegal content would be enacted on the face of the legislation, as would the Law Commission’s recommendations for three modernised communications offences; and second, that age verification would be extended to apply to non-user-to-user pornography sites.

Most recently of all, the Home Secretary is reported to have gained Cabinet support for powers for Ofcom (the regulator that would implement, supervise and enforce the Bill’s provisions) to require use of technology to proactively seek out and remove illegal content and legal content harmful to children.

As the government’s proposals have continued to evolve under the guidance of their sixth Culture Secretary, and with Parliamentary Committees and others weighing in from all directions, you may already be floundering if you have not followed, blow by blow, the progression from the 2017 Internet Safety Strategy Green Paper, via the April 2019 Online Harms White Paper and the May 2021 draft Online Safety Bill, to the recent bout of political jousting.

If you are already familiar with the legal concept of a duty of care, the significance of objective versus subjective harms, the distinction between a duty to avoid causing injury and a duty to prevent others causing injury, and the notion of safety by design, then read on. If not, or if you would like a recap, it’s all in the Annex.

In brief, the draft Bill would impose a new set of legal obligations on an estimated 24,000 UK providers of user to user services (everyone from large social media platforms to messaging services, multiplayer online games and simple discussion forums) and search engines. The government calls these obligations a duty of care.

This post is an unashamedly selective attempt to put in context some of the main threads of the government’s thinking, explain key elements of the draft Bill and pick out a few of the most significant Parliamentary Committee recommendations.

The government’s thinking

The proposals bundle together multiple policy strands. Those include:

  • Requiring providers to take steps to prevent, inhibit or respond to illegal user content
  • Requiring providers to take action in respect of ‘legal but harmful’ user content
  • Limiting the freedom of large social media platforms to decide which user content should and should not be on their services.

The government also proposes to enact new and reformed criminal offences for users. These are probably the most coherent aspects of the proposed legislation, yet still have some serious problems – in their own right, in the case of the new harm-based offence, and also in how offences interact with the illegality strand of the duty of care.

Protection of children has been a constant theme, sparking debates about age verification, age assurance and end-to-end encryption. Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence.

Strand 1: Preventing and Responding to Illegality

Under the draft Bill, all 24,000 in-scope UGC providers would be placed under a duty of care (so-called) in respect of illegal user content.  The duty would be reactive or proactive, depending on the kind of illegality involved. Illegality for this purpose means criminal offences.

The problem with applying the duty of care label to this obligation is that there is no necessary connection between safety (in the duty of care sense of risk of personal injury) and illegality. Some criminal law is safety-related and some is not. We may be tempted to talk of being made safe from illegality, but that is not safety in its proper duty of care sense.

In truth, the illegality duty appears to stem not from any legal concept of a duty of care, but from a broader argument that platforms have a moral responsibility to take positive steps to prevent criminal activity by users on their services. That contrasts with merely being incentivised to remove user content on becoming aware that it is unlawful. The latter is the position of a host under the existing intermediary liability regime, with which the proposed positive legal duty would co-exist.

That moral framing may explain why the DCMS Minister was able to say to a recent Parliamentary Committee:

“I think there is absolute unanimity that the Bill’s position on that is the right position: if it is illegal offline it is illegal online and there should be a duty on social media firms to stop it happening. There is agreement on that.” (1 Feb 2022, Commons DCMS Sub-Committee on Online Harms and Disinformation)

It is true that the illegality safety duty has received relatively little attention compared with the furore over the draft Bill’s ‘legal but harmful’ provisions. Even then, the consensus to which the Minister alludes may not be quite so firm. It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression.  To the extent that the duty requires hosts to monitor for illegality, that departs from the long-standing principle embodied in Article 15 of the eCommerce Directive prohibiting the imposition of general monitoring obligations.

It is noteworthy that the DCMS Committee Report recommends ([21]) that takedown should not be the only option to fulfil the illegality safety duty, but measures such as tagging should be available.

So an unbounded notion of preventing illegality does not sit well on the offline duty of care foundation of risk of physical injury. Difficult questions arise as a result. Should the duty apply to all kinds of criminal offence capable of being committed online? Or, more closely aligned with offline duties of care, should it be limited strictly to safety-related criminal offences? Or perhaps to risk of either physical injury or psychological harm? Or, more broadly, to offences for which it can be said that the individual is a victim?

The extent to which over time the government’s proposals have fluctuated between several of these varieties of illegality perhaps reflects the difficulty of shoehorning this kind of duty into a legal box labelled ‘duty of care’.

Moving on from the scope of illegality, what would the draft Bill require U2U providers to do? For ‘ordinary’ illegal content the safety duty would be reactive: to remove it on receiving notice. For ‘priority’ illegal content the duty would in addition be preventative, as the DCMS described in its recent announcement of new categories of priority illegal content:

“To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.”

These kinds of duty prompt questions about how a platform is to decide what is and is not illegal, or (apparently) who is a suspicious user. The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.

The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.

Current state of play

The government’s newest list of priority offences (those to which the proactive duty would apply) mostly involves individuals as victims, but also includes money laundering, an offence which does not. The list includes revenge and extreme pornography, as to which the Joint Scrutiny Committee observed that the first is an offence against specific individuals, whereas the second is not.

Given how broadly the priority offences are now ranging, it may be a reasonable assumption that the government does not intend to limit them to conduct that would carry a risk of physical or psychological harm to a victim.

The government intends that its extended list of priority offences would be named on the face of the Bill. That goes some way towards meeting criticism by the Committees of leaving that to secondary legislation. However, the government has not said that the power to add to the list by secondary legislation would be removed.

As to the threshold that would trigger the duty, the Joint Scrutiny Committee has said that it is content with ‘reasonable grounds to believe’ so long as certain safeguards are in place that would render the duty compatible with an individual’s right to free speech; and so long as service providers are required to apply the test in a proportionate manner set out in clear and accessible terms to users of the service.

The Joint Committee’s specific suggested safeguard is that Ofcom should issue a binding Code of Practice on identifying, reporting on and acting on illegal content. The Committee considers that Ofcom’s own obligation to comply with human rights legislation would provide an additional safeguard for freedom of expression in how providers fulfil this requirement. How much comfort one should take from that, when human rights legislation sets only the outer boundaries of acceptable conduct by the state, is debatable.

The Joint Committee also refers to other safeguards proposed elsewhere in its report. Identifying exactly which it is referring to in the context of illegality is not easy. Most probably, it is referring to those listed at [284], at least insofar as they relate to the illegality safety duty.

The Committee proposes these as a more effective alternative to strengthening the ‘have regard to the importance of freedom of expression’ duty in Clause 12 of the draft Bill:

  • greater independence for Ofcom ([377])
  • routes for individual redress beyond service providers ([457])
  • tighter definitions around content that creates a risk of harm ([176] (adults), [202] (children))
  • a greater emphasis on safety by design ([82])
  • a broader requirement to be consistent in the application of terms of service
  • stronger minimum standards ([184])
  • mandatory codes of practice set by Ofcom, who are required to be compliant with human rights law (generally [358]; illegal content [144]; content in the public interest [307])
  • stronger protections for news publisher content ([304])

It is not always obvious how some of these recommendations (such as increased emphasis on safety by design) qualify as freedom of expression safeguards.

For its part, the DCMS Committee has suggested ([12]) that the definition of illegal content should be reframed to explicitly add the need to consider context as a factor. How providers should go about obtaining such contextual information - much of which will be outside the contents of user posts – is unclear. The recommendation also has implications in the degree of surveillance and breadth of analysis of user communications that would be necessary to fulfil the duty.

Content in the public interest

The Joint Committee recommends a revised approach to the draft Bill’s protections for journalistic content and content of democratic importance ([307]). At present these qualifications to the illegality and legal but harmful duties would apply only to Category 1 service providers. However, the Committee also recommends (at [246]) replacing strict categories based on size and functionality with a risk-based sliding scale, which would determine which statutory duties apply to which providers. (The government has told the Petitions Committee that it is considering changing the Category 1 qualification from size and functionality to size or functionality.)

The Joint Committee relies significantly on this recommendation, under the heading of ‘protecting high value speech’. It proposes to replace the existing journalism and content of democratic importance protections with a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). It gives the examples of journalistic content, contributions to political or societal debate and whistleblowing as being likely to be in the public interest.

Ofcom would be expected to produce a binding Code of Practice on steps to be taken to protect such content and guidance on what is likely to be in the public interest, based on their existing experience and caselaw.

As with the existing proposed protections, the ‘public interest’ proposal appears to be intended to apply across the board to both illegality and legal but harmful content (see, for instance, the Committee’s discussion at [135] in relation to the Law Commission’s proposed new ‘harm-based’ communications offence). This proposal is discussed under Strand 3 below.

Strand 2: Legal but harmful

The most heavily debated aspect of the government’s proposals has been the ‘legal but harmful content’ duty. In the draft Bill this comes in two versions: a substantive duty to mitigate user content harmful to children; and a transparency duty in relation to user content harmful to adults. That, at any rate, appears to be the government’s political intention. As drafted, the Bill could be read as going further and imposing a substantive ‘content harmful to adults’ duty (something that at least some of the Committees want the legislation explicitly to do).

Compared with an illegality duty, the legal but harmful duty is conceptually closer to a duty of care properly so called. As a species of duty to take care to avoid harm to others, it at least inhabits approximately the same universe. However, the similarity stops there. It is a duty of care detached from its moorings (risk of objectively ascertainable physical injury) and then extended into a duty to prevent other people harming each other. As such, like the illegality duty, it has no comparable equivalent in the offline world; and again, as with the illegality duty, any concept of risk-creating activity by providers is stretched and homeopathically diluted to encompass mere facilitation of individual public speech.

Those features make the legal but harmful duty a categorically different kind of obligation from analogous offline duties of care; one that – at least if framed as a substantive obligation - is difficult to render compliant with a human rights framework, due to the inherently vague notions of harm that inevitably come into play once harm is extended beyond risk of objectively ascertainable physical injury.

This problem has bedevilled the Online Harms proposals from the start. The White Paper (Harm V.1) left harm undefined, which would have empowered Ofcom to write an alternative statute book to govern online speech. The Full Consultation Response (Harm V.2) defined harm as “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. The draft Bill (Harm V.3) spans the gamut, from undefined (for priority harmful content) to physical or psychological harm (general definition) to a complex cascade of definitions starting with the “adult (or child) of ordinary sensibilities” for residual non-priority harmful content.

If harm includes subjectively perceived harm, then it is likely to embody a standard of the most easily offended reader and to require platforms to make decisions based on impossibly vague criteria and unascertainable factual context.

The debate has not been helped by a common tendency to refer to ‘risk’ in the abstract, without identifying what counts and, just as importantly, does not count as harm. Everyday expressions such as ‘harm’, ‘abuse’, ‘trolling’ and so on may suffice for political debate. But legislation has to grapple with the uncomfortable question of what kinds of lawful but controversial and unpleasant speech should not qualify as harmful. That is a question that a lawmaker cannot avoid if legislation is to pass the ‘clear and precise’ rule of law test.  

Even when a list is proposed it still tends to be pitched at a level that can leave basic questions unanswered. The Joint Committee, for instance, proposes a list including ‘abuse, harassment or stirring up of violence or hatred based on the protected characteristics in the Equality Act 2010’, and “content or activity likely to cause harm amounting to significant psychological distress to a likely audience (defined in line with the Law Commission offence)”.

On that basis does blasphemy count as legal but harmful content? Does the Committee’s proposed list of specific harms answer that question? Some would certainly claim to suffer significant psychological distress from reading blasphemous material.  Religion or belief is a protected characteristic under the Equality Act. How would that be reconciled with the countervailing duty to take into account the importance of freedom of expression within the law or, as the Joint Committee would propose for high risk platforms, to assess the public interest in high value speech under the guidance of Ofcom?

If none of these provides a clear answer, the result is to delegate the decision-making to Ofcom. That prompts the question whether such a controversial decision as to what speech is or is not permissible online should be made, and made in clear terms, by Parliament.

While on the topic of delegation, let us address the proposition that the draft Bill’s ‘legal but harmful to adults’ duty delegates a state power to platforms. The Joint Committee report has an entire section entitled ‘Delegation of decision making’ ([165] to [169]).

At present, service providers have freedom to decide what legal content to allow or not on their platforms, and to make their own rules accordingly. That does not involve any delegation of state power, any more than Conway Hall exercises delegated state power when it decides on its venue hiring policy. Unless and until the state chooses to take a power via legislation, there is no state power capable of delegation.

Clause 11 (if we take at face value what the government says it is) requires platforms to provide users with information about certain of their decisions, and to enforce their rules consistently. Again, the state has not taken any power (either direct or via Ofcom) to instruct providers what rules to make. No state power, no delegation.

It is only when (as at least some Committees propose) the state takes a power to direct or govern decision-making that delegation is involved. Such a power would be delegated to Ofcom. Providers are then obligated to enforce the Bill’s and Ofcom’s rules against users. That involves providers in making decisions about what content contravenes the rules.  There is still no delegation of rule-making, except to the extent that latitude, vagueness or ambiguity in those rules results in de facto delegation of rule-making to the providers.

Current state of play

None of the Committees has accepted the submissions from a number of advocacy groups (and the previous Lords Committee Report on Freedom of Expression in the Digital Age) that ‘legal but harmful to adults’ obligations should be dropped from the legislation.

However, each Committee has put forward its own alternative formulation:

  • The Joint Committee’s list of reasonably foreseeable risks of harm that providers should be required to identify and mitigate (replacing the draft Bill’s transparency duty with a substantive mitigation duty) ([176]), as part of an overall package of recommended changes
  • The Petitions Committee’s recommendation that the primary legislation should contain as comprehensive an indication as possible of what content would be considered harmful to adults or children; and that abuse based on characteristics protected under the Equality Act and hate crime legislation should be designated as priority harmful content in the primary legislation. This Committee also considers that the legal but harmful duty should be a substantive mitigation duty. ([46], [67])
  • The DCMS Committee’s recommendation (similar to the Joint Committee) that the definition of (legal) content that is harmful to adults should be reframed to apply to reasonably foreseeable harms identified in risk assessments ([20]). This sits alongside proposals that providers be positively required to balance their safety duties with freedom of expression ([19]), and that providers should be required to assess and take into account context, the position of the speaker, the susceptibility of the audience and the content’s accuracy ([20]). This Committee appears also, at least implicitly, to support conversion into a substantive duty.

The DCMS Committee also recommends that the definition of legal content harmful to adults should: “explicitly include content that undermines, or risks undermining, the rights or reputation of others, national security, public order and public health or morals, as also established in international human rights law.”

On the face of it this is a strange proposal. The listed items are aims in pursuance of which, according to international human rights law, a state may if it so wishes restrict freedom of expression - subject to the restriction being prescribed by law (i.e. by clear and certain rules), necessary for the achievement of that aim, and proportionate.

The listed aims do not themselves form a set of clear and precise substantive rules, and are not converted into such by the device of adding ‘undermines, or risks undermining’. The result is an unfeasibly vague formulation. Moreover, it appears to suggest that every kind of speech that can legitimately be restricted under international human rights law should be. It is difficult to believe that the Committee really intends that.

The various Committee proposals illustrate how firmly the draft Bill is trapped between the twin devils of over-removal via the blunt instrument of a content-oriented safety duty; and of loading onto intermediaries the obligation to make ever finer and more complex multi-factorial judgements about content. The third propounded alternative of safety by design has its own vice of potentially interfering with all content, good and bad alike.

Strand 3: Reduce the discretion of large social media platforms to decide what content should and should not be on their services

Until very late in the consultation process the focus of the government’s Online Harms proposals was entirely on imposing duties on providers to prevent harm by their users, with the consequent potential for over-removal of user content mitigated to some degree by a duty to have regard to the importance of freedom of expression within the law. This kind of proposal sought to leverage the abilities of platforms to act against user content.

When the Full Response was published a new strand was evident: seeking to rein in the ability of large platforms to decide what content should and should not be present on their services. It is possible that this may have been prompted by events such as suspension of then President Trump’s Twitter account.

Be that as it may, the Full Response and now the draft Bill include provisions, applicable to Category 1 U2U providers, conferring special protections on journalistic content and content of democratic importance. The most far-reaching protections relate to content of democratic importance. For such content the provider must not only ensure that it has systems and processes designed to ensure that the importance of free expression of such content is taken into account when making certain decisions (such as takedown, restriction or action against a user), but also ensure that those systems and processes apply in the same way to a diversity of political opinion. Whatever the merits and demerits of such proposals, they are far removed from the original policy goal of ensuring user safety.

Current state of play

As noted above, the Joint Committee proposes that the journalistic content and content of democratic importance protections be replaced by a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). The DCMS Committee’s recommendation on the scope of legal but harmful content would include democratic importance and journalistic nature when considering the context of content ([23]).

Although the Committee’s discussion is about protecting ‘high value speech’, there is a risk involved in generalising this protection to the kind of single statutory safeguard for ‘content in the public interest’ envisaged by the Committee. The risk is that in practice the safeguard would be turned on its head – with the result that only limited categories of ‘high value speech’ would be seen as presumptively qualifying for protection from interference, leaving ‘low value’ speech having to justify itself and, in reality, shorn of protection.

That is the error that Warby L.J. identified in Scottow, a prosecution under S.127 Communications Act 2003:

“The Crown evidently did not appreciate the need to justify the prosecution, but saw it as the defendant's task to press the free speech argument. The prosecution argument failed entirely to acknowledge the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another. The Judge appears to have considered that a criminal conviction was merited for acts of unkindness, and calling others names, and that such acts could only be justified if they made a contribution to a "proper debate".  … It is not the law that individuals are only allowed to make personal remarks about others online if they do so as part of a "proper debate". 

In the political arena, the presumption that anything unpleasant or offensive is prima facie to be condemned can be a powerful one. The 10 December 2021 House of Lords debate on freedom of speech was packed with pleas to be nicer to each other online: hard to disagree with as a matter of etiquette. But if being unpleasant is thought of itself to create a presumption against freedom of expression, that does not reflect human rights law.

The risk of de facto reversal of the presumption in favour of protection of speech when we focus on protecting ‘high value’ speech is all the greater where platforms are expected to act in pursuance of their safety duty proactively, in near real-time and at scale, against a duty-triggering threshold of reasonable grounds to believe. 

That is without even considering the daunting prospect of an AI algorithm that claims to be capable of assessing the public interest.

Strand 4: Create new and reformed criminal offences that would apply directly to users

In parallel with the government’s proposals for an online duty of care, the Law Commission has been conducting two projects looking at the criminal law as it affects online and other communications: Modernising Communications Offences (Law Com No 399, 21 July 2021) and Hate Crime Laws (Law Com No 402, 7 December 2021).

The communications offences report recommended:

  • A new harm-based communications offence to replace S.127(1) Communications Act 2003 and the Malicious Communications Act 1988
  • A new offence of encouraging or assisting serious self-harm
  • A new offence of cyberflashing; and
  • New offences of sending knowingly false, persistent or threatening communications, to replace S.127(2) Communications Act 2003

It also recommended that the government consider legislating to criminalise maliciously sending flashing images to known sufferers of epilepsy. It was not persuaded that specific offences of pile-on harassment or glorification of violent crime would be necessary, effective or desirable.

The hate crime report made a complex series of recommendations, including extending the existing ‘stirring up’ offences to cover hatred on grounds of sex or gender. It recommended that if the draft Online Safety Bill becomes law, inflammatory hate material should be included as ‘priority illegal content’ and the stirring up offences should not apply to social media companies and other platforms in respect of user to user content unless intent to stir up hatred on the part of the provider could be proved.

It also recommended that the government undertake a review of the need for a specific offence of public sexual harassment (covering both online and offline).

The government has said in an interim response to the communications offences report that it proposes to include three of the recommended offences in the Bill: the harm-based communications offence, the false communications offence and the threatening communications offence. The remainder are under consideration. The hate crime report awaits an interim response.

From the point of view of the safety duties under the Online Safety Bill, the key consequence of new offences is that the dividing line between the illegality duty and the ‘legal but harmful’ duties would shift. However, the ‘reasonable grounds to believe’ threshold would not change, and would apply to the new offences as it does to existing offences.

The Petitions Committee acknowledged concerns over how the proposed harm-based offence would intersect with the illegality duties:

“The Law Commission is right to recommend refocusing online communications offences onto the harm abusive messages can cause to victims. We welcome the Government’s commitment to adopt the proposed threatening and ‘harm-based’ communications offences. However, we also acknowledge the uncertainty and hesitation of some witnesses about how the new harm-based offence will be interpreted in practice, including the role of social media companies and other online platforms in identifying this content—as well as other witnesses’ desire for the law to deal with more cases of online abuse more strongly.”

It recommended that the effectiveness of the offences be monitored, and that the government publish an initial review of the workings and impact of any new communications offences within the first two years after they come into force.

The Joint Committee supported the Law Commission recommendations. It also suggested that concerns about ambiguity and the context-dependent nature of the proposed harm-based offence could be addressed through the statutory public interest requirement discussed above. [135]

Annex

What is a duty of care?

In its proper legal sense a duty of care is a duty to take reasonable care to avoid injuring other people – that is why it is called a duty of care. It is not a duty to prevent other people breaking the law. Nor (other than exceptionally) is it a duty to prevent other people injuring each other. Still less is it a duty to prevent other people speaking harshly to each other.

A duty of care exists in the common law of negligence and occupiers’ liability. Analogous duties exist in regulatory contexts such as health and safety law. A duty of care does not, however, mean that everyone owes a duty to avoid causing any kind of harm to anyone else in any situation. Quite the reverse. The scope of a duty of care is limited by factors such as kinds of injury, causation, foreseeability and others.

In particular, for arms-length relationships such as property owner and visitor (the closest analogy to platform and user) the law carefully restricts safety-related duties of care to objectively ascertainable kinds of harm: physical injury and damage to property.

Objective injury v subjective harm

Once we move into subjective speech harms the law is loath to impose a duty. The UK Supreme Court held in Rhodes that the author of a book owes no duty to avoid causing distress to a potential reader of the book. It said:

“It is difficult to envisage any circumstances in which speech which is not deceptive, threatening or possibly abusive, could give rise to liability in tort for wilful infringement of another’s right to personal safety. The right to report the truth is justification in itself. That is not to say that the right of disclosure is absolute … . But there is no general law prohibiting the publication of facts which will cause distress to another, even if that is the person’s intention.” [77]

That is the case whether the author sells one book or a million, and whether the book languishes in obscurity or is advertised on the side of every bus and taxi.

The source of some of the draft Bill’s most serious problems lies in the attempt to wrench the concept of a safety-related duty of care out of its offline context – risk of physical injury - and apply it to the contested, subjectively perceived claims of harm that abound in the context of speech.

In short, speech is not a tripping hazard. Treating it as such propels us ultimately into the territory of claiming that speech is violence: a proposition that reduces freedom of expression to a self-cancelling right.

Speech is protected as a fundamental right. Some would say it is the right that underpins all other rights. It is precisely because speech is not violence that Berkeley students enjoy the right to display placards proclaiming that speech is violent. The state is – or should be - powerless to prevent them, however wrong-headed their message.

Quite how, on the nature of speech, a Conservative government has ended up standing shoulder to shoulder with those Berkeley students is one of the ineffable mysteries of politics. 

Causing v preventing

Even where someone is under a duty to avoid causing physical injury to others, that does not generally include a duty to prevent them from injuring each other. Exceptionally, such a preventative duty can (but does not necessarily) arise, for instance where the occupier of property does something that creates a risk of that happening. Serving alcohol on the premises, or using property for a public golf course, would be examples. Absent that, or a legally close relationship (such as teacher-pupil) or an assumption of responsibility, there is no duty. Even less would any preventative duty exist for what visitors say to each other on the property.

The duty proposed to be imposed on UGC platforms is thus doubly removed from offline duties of care. First, it would extend far beyond physical injury into subjective harms. Second, the duty consists in the platform being required to prevent or restrict how users behave towards each other.

It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.

Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.

Safety by design

It may be suggested that by designing in platform safety features from the start it is possible to reduce or eliminate risk, while avoiding the problems of detecting, identifying and moderating particular kinds of illegal or harmful content.

It is true that some kinds of safety feature – a reporting button, for instance – do not entail  any kind of content moderation. However, risk is not a self-contained concept. We always have to ask: “risk of what?” If the answer is “risk of people encountering illegal or harmful content”, at first sight that takes the platform back towards trying to distinguish permissible from impermissible content. However, that is not necessarily so.

A typical example of safety by design concerns amplification. It is suggested that platforms should be required to design in ‘friction’ features that inhibit sharing and re-sharing of content, especially at scale.

The problem with a content-agnostic approach such as this is that it inevitably strikes at all content alike (although it would no doubt be argued the overall impact of de-amplification is skewed towards ‘bad’ content since that is more likely to be shared and re-shared).

However, the content-agnostic position is rarely maintained rigorously, often reverting to discussion of ways of preventing amplification of illegal or harmful content (which takes us back to identifying and moderating such content). An example of this can be seen in Joint Committee recommendation 82(e):

“Risks created by virality and the frictionless sharing of content at scale, mitigated by measures to create friction, slow down sharing whilst viral content is moderated, require active moderation in groups over a certain size…”

Criticism of amplification is encapsulated in the slogan ‘freedom of speech is not freedom of reach’. As a matter of human rights law, however, interference with the reach of communications certainly engages the right of freedom of expression. As the Indian Supreme Court held in January 2020:

“There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial.”

Broadcast regulation

The model adopted by the draft Bill is discretionary regulation by regulator, rather than regulation by the general law. Whether discretionary broadcast-style regulation is an appropriate model for individual speech is a debate in its own right.

[Grammatical correction 19 Feb 2022]