Saturday 19 February 2022

Harm Version 4.0 – the Online Safety Bill in metamorphosis

It is time – in fact it is overdue – to take stock of the increasingly imminent Online Safety Bill. The two months before and after Christmas saw a burst of activity: reports from the Joint Parliamentary Committee scrutinising the draft Bill, from the Commons DCMS Committee on the ‘Legal but Harmful’ issue, and from the House of Commons Petitions Committee on Tackling Online Abuse.

Several Parliamentary debates took place, and recently the DCMS made two announcements: first, that an extended list of priority illegal content would be enacted on the face of the legislation, as would the Law Commission’s recommendations for three modernised communications offences; and second, that age verification would be extended to apply to non-user-to-user pornography sites.

Most recently of all, the Home Secretary is reported to have gained Cabinet support for powers for Ofcom (the regulator that would implement, supervise and enforce the Bill’s provisions) to require use of technology to proactively seek out and remove illegal content and legal content harmful to children.

As the government’s proposals have continued to evolve under the guidance of their sixth Culture Secretary, and with Parliamentary Committees and others weighing in from all directions, you may already be floundering if you have not followed, blow by blow, the progression from the 2017 Internet Safety Strategy Green Paper, via the April 2019 Online Harms White Paper and the May 2021 draft Online Safety Bill, to the recent bout of political jousting.

If you are already familiar with the legal concept of a duty of care, the significance of objective versus subjective harms, the distinction between a duty to avoid causing injury and a duty to prevent others causing injury, and the notion of safety by design, then read on. If not, or if you would like a recap, it’s all in the Annex.

In brief, the draft Bill would impose a new set of legal obligations on an estimated 24,000 UK providers of user-to-user services (everyone from large social media platforms to messaging services, multiplayer online games and simple discussion forums) and search engines. The government calls these obligations a duty of care.

This post is an unashamedly selective attempt to put in context some of the main threads of the government’s thinking, explain key elements of the draft Bill and pick out a few of the most significant Parliamentary Committee recommendations.

The government’s thinking

The proposals bundle together multiple policy strands. Those include:

  • Requiring providers to take steps to prevent, inhibit or respond to illegal user content
  • Requiring providers to take action in respect of ‘legal but harmful’ user content
  • Limiting the freedom of large social media platforms to decide which user content should and should not be on their services.

The government also proposes to enact new and reformed criminal offences for users. These are probably the most coherent aspects of the proposed legislation, yet still have some serious problems – in their own right, in the case of the new harm-based offence, and also in how offences interact with the illegality strand of the duty of care.

Protection of children has been a constant theme, sparking debates about age verification, age assurance and end-to-end encryption. Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence.

Strand 1: Preventing and Responding to Illegality

Under the draft Bill, all 24,000 in-scope UGC providers would be placed under a duty of care (so-called) in respect of illegal user content.  The duty would be reactive or proactive, depending on the kind of illegality involved. Illegality for this purpose means criminal offences.

The problem with applying the duty of care label to this obligation is that there is no necessary connection between safety (in the duty of care sense of risk of personal injury) and illegality. Some criminal law is safety-related and some is not. We may be tempted to talk of being made safe from illegality, but that is not safety in its proper duty of care sense.

In truth, the illegality duty appears to stem not from any legal concept of a duty of care, but from a broader argument that platforms have a moral responsibility to take positive steps to prevent criminal activity by users on their services. That contrasts with merely being incentivised to remove user content on becoming aware that it is unlawful. The latter is the position of a host under the existing intermediary liability regime, with which the proposed positive legal duty would co-exist.

That moral framing may explain why the DCMS Minister was able to say to a recent Parliamentary Committee:

“I think there is absolute unanimity that the Bill’s position on that is the right position: if it is illegal offline it is illegal online and there should be a duty on social media firms to stop it happening. There is agreement on that.” (1 Feb 2022, Commons DCMS Sub-Committee on Online Harms and Disinformation)

It is true that the illegality safety duty has received relatively little attention compared with the furore over the draft Bill’s ‘legal but harmful’ provisions. Even then, the consensus to which the Minister alludes may not be quite so firm. It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression.  To the extent that the duty requires hosts to monitor for illegality, that departs from the long-standing principle embodied in Article 15 of the eCommerce Directive prohibiting the imposition of general monitoring obligations.

It is noteworthy that the DCMS Committee Report recommends ([21]) that takedown should not be the only option to fulfil the illegality safety duty, but measures such as tagging should be available.

So an unbounded notion of preventing illegality does not sit well on the offline duty of care foundation of risk of physical injury. Difficult questions arise as a result. Should the duty apply to all kinds of criminal offence capable of being committed online? Or, more closely aligned with offline duties of care, should it be limited strictly to safety-related criminal offences? Or perhaps to risk of either physical injury or psychological harm? Or, more broadly, to offences for which it can be said that the individual is a victim?

The extent to which over time the government’s proposals have fluctuated between several of these varieties of illegality perhaps reflects the difficulty of shoehorning this kind of duty into a legal box labelled ‘duty of care’.

Moving on from the scope of illegality, what would the draft Bill require U2U providers to do? Under the draft Bill, for ‘ordinary’ illegal content the safety duty would be reactive – to remove it on receiving notice. For ‘priority’ illegal content the duty would in addition be preventative: as the DCMS described it in their recent announcement of new categories of priority illegal content:

“To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.”

These kinds of duty prompt questions about how a platform is to decide what is and is not illegal, or (apparently) who is a suspicious user. The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.

The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.

Current state of play

The government’s newest list of priority offences (those to which the proactive duty would apply) mostly involves individuals as victims but also includes money laundering, an offence which does not do so. The list includes revenge and extreme pornography, as to which the Joint Scrutiny Committee observed that the first is an offence against specific individuals, whereas the second is not.

Given how broadly the priority offences are now ranging, it may be a reasonable assumption that the government does not intend to limit them to conduct that would carry a risk of physical or psychological harm to a victim.

The government intends that its extended list of priority offences would be named on the face of the Bill. That goes some way towards meeting criticism by the Committees of leaving that to secondary legislation. However, the government has not said that the power to add to the list by secondary legislation would be removed.

As to the threshold that would trigger the duty, the Joint Scrutiny Committee has said that it is content with ‘reasonable grounds to believe’ so long as certain safeguards are in place that would render the duty compatible with an individual’s right to free speech; and so long as service providers are required to apply the test in a proportionate manner set out in clear and accessible terms to users of the service.

The Joint Committee’s specific suggested safeguard is that Ofcom should issue a binding Code of Practice on identifying, reporting on and acting on illegal content. The Committee considers that Ofcom’s own obligation to comply with human rights legislation would provide an additional safeguard for freedom of expression in how providers fulfil this requirement. How much comfort one should take from that, when human rights legislation sets only the outer boundaries of acceptable conduct by the state, is debatable.

The Joint Committee also refers to other safeguards proposed elsewhere in its report. Identifying exactly which it is referring to in the context of illegality is not easy. Most probably, it is referring to those listed at [284], at least insofar as they relate to the illegality safety duty.

The Committee proposes these as a more effective alternative to strengthening the ‘have regard to the importance of freedom of expression’ duty in Clause 12 of the draft Bill:

  • greater independence for Ofcom ([377])
  • routes for individual redress beyond service providers ([457])
  • tighter definitions around content that creates a risk of harm ([176] (adults), [202] (children))
  • a greater emphasis on safety by design ([82])
  • a broader requirement to be consistent in the application of terms of service
  • stronger minimum standards ([184])
  • mandatory codes of practice set by Ofcom, who are required to be compliant with human rights law (generally [358]; illegal content [144]; content in the public interest [307])
  • stronger protections for news publisher content ([304])

It is not always obvious how some of these recommendations (such as increased emphasis on safety by design) qualify as freedom of expression safeguards.

For its part, the DCMS Committee has suggested ([12]) that the definition of illegal content should be reframed to explicitly add the need to consider context as a factor. How providers should go about obtaining such contextual information – much of which will be outside the contents of user posts – is unclear. The recommendation also has implications for the degree of surveillance and breadth of analysis of user communications that would be necessary to fulfil the duty.

Content in the public interest

The Joint Committee recommends a revised approach to the draft Bill’s protections for journalistic content and content of democratic importance. ([307]) At present these qualifications to the illegality and legal but harmful duties would apply only to Category 1 service providers. However, the Committee also recommends (at [246]) replacing strict categories based on size and functionality with a risk-based sliding scale, which would determine which statutory duties apply to which providers. (The government has told the Petitions Committee that it is considering changing the Category 1 qualification from size and functionality to size or functionality.)

The Joint Committee relies significantly on this recommendation, under the heading of ‘protecting high value speech’. It proposes to replace the existing journalism and content of democratic importance protections with a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). It gives the examples of journalistic content, contributions to political or societal debate and whistleblowing as being likely to be in the public interest.

Ofcom would be expected to produce a binding Code of Practice on steps to be taken to protect such content and guidance on what is likely to be in the public interest, based on their existing experience and caselaw.

As with the existing proposed protections, the ‘public interest’ proposal appears to be intended to apply across the board to both illegality and legal but harmful content (see, for instance, the Committee’s discussion at [135] in relation to the Law Commission’s proposed new ‘harm-based’ communications offence). This proposal is discussed under Strand 3 below.

Strand 2 - Legal but harmful

The most heavily debated aspect of the government’s proposals has been the ‘legal but harmful content’ duty. In the draft Bill this comes in two versions: a substantive duty to mitigate user content harmful to children; and a transparency duty in relation to user content harmful to adults. That, at any rate, appears to be the government’s political intention. As drafted, the Bill could be read as going further and imposing a substantive ‘content harmful to adults’ duty (something that at least some of the Committees want the legislation explicitly to do).

Compared with an illegality duty, the legal but harmful duty is conceptually closer to a duty of care properly so called. As a species of duty to take care to avoid harm to others, it at least inhabits approximately the same universe. However, the similarity stops there. It is a duty of care detached from its moorings (risk of objectively ascertainable physical injury) and then extended into a duty to prevent other people harming each other. As such, like the illegality duty, it has no comparable equivalent in the offline world; and again, as with the illegality duty, any concept of risk-creating activity by providers is stretched and homeopathically diluted to encompass mere facilitation of individual public speech.

Those features make the legal but harmful duty a categorically different kind of obligation from analogous offline duties of care; one that – at least if framed as a substantive obligation – is difficult to render compliant with a human rights framework, due to the inherently vague notions of harm that inevitably come into play once harm is extended beyond risk of objectively ascertainable physical injury.

This problem has bedevilled the Online Harms proposals from the start. The White Paper (Harm V.1) left harm undefined, which would have empowered Ofcom to write an alternative statute book to govern online speech. The Full Consultation Response (Harm V.2) defined harm as “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. The draft Bill (Harm V.3) spans the gamut, from undefined (for priority harmful content) to physical or psychological harm (general definition) to a complex cascade of definitions starting with the “adult (or child) of ordinary sensibilities” for residual non-priority harmful content.

If harm includes subjectively perceived harm, then it is likely to embody a standard of the most easily offended reader and to require platforms to make decisions based on impossibly vague criteria and unascertainable factual context.

The debate has not been helped by a common tendency to refer to ‘risk’ in the abstract, without identifying what counts and, just as importantly, does not count as harm. Everyday expressions such as ‘harm’, ‘abuse’, ‘trolling’ and so on may suffice for political debate. But legislation has to grapple with the uncomfortable question of what kinds of lawful but controversial and unpleasant speech should not qualify as harmful. That is a question that a lawmaker cannot avoid if legislation is to pass the ‘clear and precise’ rule of law test.  

Even when a list is proposed it still tends to be pitched at a level that can leave basic questions unanswered. The Joint Committee, for instance, proposes a list including ‘abuse, harassment or stirring up of violence or hatred based on the protected characteristics in the Equality Act 2010’, and “content or activity likely to cause harm amounting to significant psychological distress to a likely audience (defined in line with the Law Commission offence)”.

On that basis does blasphemy count as legal but harmful content? Does the Committee’s proposed list of specific harms answer that question? Some would certainly claim to suffer significant psychological distress from reading blasphemous material.  Religion or belief is a protected characteristic under the Equality Act. How would that be reconciled with the countervailing duty to take into account the importance of freedom of expression within the law or, as the Joint Committee would propose for high risk platforms, to assess the public interest in high value speech under the guidance of Ofcom?

If none of these provides a clear answer, the result is to delegate the decision-making to Ofcom. That prompts the question whether such a controversial decision as to what speech is or is not permissible online should be made, and made in clear terms, by Parliament.

While on the topic of delegation, let us address the proposition that the draft Bill’s ‘legal but harmful to adults’ duty delegates a state power to platforms. The Joint Committee report has an entire section entitled ‘Delegation of decision making’ ([165] to [169]).

At present, service providers have freedom to decide what legal content to allow or not on their platforms, and to make their own rules accordingly. That does not involve any delegation of state power, any more than Conway Hall exercises delegated state power when it decides on its venue hiring policy. Unless and until the state chooses to take a power via legislation, there is no state power capable of delegation.

Clause 11 (if we take at face value what the government says it is) requires platforms to provide users with information about certain of their decisions, and to enforce their rules consistently. Again, the state has not taken any power (either direct or via Ofcom) to instruct providers what rules to make. No state power, no delegation.

It is only when (as at least some Committees propose) the state takes a power to direct or govern decision-making that delegation is involved. Such a power would be delegated to Ofcom. Providers are then obligated to enforce the Bill’s and Ofcom’s rules against users. That involves providers in making decisions about what content contravenes the rules.  There is still no delegation of rule-making, except to the extent that latitude, vagueness or ambiguity in those rules results in de facto delegation of rule-making to the providers.

Current state of play

None of the Committees has accepted the submissions from a number of advocacy groups (and the previous Lords Committee Report on Freedom of Expression in the Digital Age) that ‘legal but harmful to adults’ obligations should be dropped from the legislation.

However, each Committee has put forward its own alternative formulation:

  • The Joint Committee’s list of reasonably foreseeable risks of harm that providers should be required to identify and mitigate (replacing the draft Bill’s transparency duty with a substantive mitigation duty) ([176]), as part of an overall package of recommended changes
  • The Petitions Committee’s recommendation that the primary legislation should contain as comprehensive an indication as possible of what content would be considered harmful to adults or children; and that abuse based on characteristics protected under the Equality Act and hate crime legislation should be designated as priority harmful content in the primary legislation. This Committee also considers that the legal but harmful duty should be a substantive mitigation duty. ([46], [67])
  • The DCMS Committee’s recommendation (similar to the Joint Committee) that the definition of (legal) content that is harmful to adults should be reframed to apply to reasonably foreseeable harms identified in risk assessments ([20]). This sits alongside a proposal that providers be positively required to balance their safety duties with freedom of expression ([19]), and a proposal that providers should be required to assess and take into account context, the position of the speaker, the susceptibility of the audience and the content’s accuracy ([20]). This Committee also appears, at least implicitly, to support conversion into a substantive duty.

The DCMS Committee also recommends that the definition of legal content harmful to adults should: “explicitly include content that undermines, or risks undermining, the rights or reputation of others, national security, public order and public health or morals, as also established in international human rights law.”

On the face of it this is a strange proposal. The listed items are aims in pursuance of which, according to international human rights law, a state may if it so wishes restrict freedom of expression – subject to the restriction being prescribed by law (i.e. by clear and certain rules), necessary for the achievement of that aim, and proportionate.

The listed aims do not themselves form a set of clear and precise substantive rules, and are not converted into such by the device of adding ‘undermines, or risks undermining’. The result is an unfeasibly vague formulation. Moreover, it appears to suggest that every kind of speech that can legitimately be restricted under international human rights law, should be. It is difficult to believe that the Committee really intends that.

The various Committee proposals illustrate how firmly the draft Bill is trapped between the twin devils of over-removal via the blunt instrument of a content-oriented safety duty; and of loading onto intermediaries the obligation to make ever finer and more complex multi-factorial judgements about content. The third propounded alternative of safety by design has its own vice of potentially interfering with all content, good and bad alike.

Strand 3 - Reduce the discretion of large social media platforms to decide what content should and should not be on their services

Until very late in the consultation process the focus of the government’s Online Harms proposals was entirely on imposing duties on providers to prevent harm by their users, with the consequent potential for over-removal of user content mitigated to some degree by a duty to have regard to the importance of freedom of expression within the law. This kind of proposal sought to leverage the abilities of platforms to act against user content.

When the Full Response was published a new strand was evident: seeking to rein in the ability of large platforms to decide what content should and should not be present on their services. It is possible that this may have been prompted by events such as suspension of then President Trump’s Twitter account.

Be that as it may, the Full Response and now the draft Bill include provisions, applicable to Category 1 U2U providers, conferring special protections on journalistic content and content of democratic importance. The most far-reaching protections relate to content of democratic importance. For such content the provider must not only have systems and processes designed to take into account the importance of free expression of such content when making certain decisions (such as takedown, restriction or action against a user), but must ensure that they apply in the same way to a diversity of political opinion. Whatever the merits and demerits of such proposals, they are far removed from the original policy goal of ensuring user safety.

Current state of play

As noted above, the Joint Committee proposes that the journalistic content and content of democratic importance protections be replaced by a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). The DCMS Committee recommendation on the scope of legal but harmful content recommends including democratic importance and journalistic nature when considering the context of content ([23]).

Although the Committee’s discussion is about protecting ‘high value speech’, there is a risk involved in generalising this protection to the kind of single statutory safeguard for ‘content in the public interest’ envisaged by the Committee. The risk is that in practice the safeguard would be turned on its head – with the result that only limited categories of ‘high value speech’ would be seen as presumptively qualifying for protection from interference, leaving ‘low value’ speech to justify itself and in reality shorn of protection.

That is the error that Warby L.J. identified in Scottow, a prosecution under S.127 Communications Act 2003:

“The Crown evidently did not appreciate the need to justify the prosecution, but saw it as the defendant's task to press the free speech argument. The prosecution argument failed entirely to acknowledge the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another. The Judge appears to have considered that a criminal conviction was merited for acts of unkindness, and calling others names, and that such acts could only be justified if they made a contribution to a "proper debate".  … It is not the law that individuals are only allowed to make personal remarks about others online if they do so as part of a "proper debate".”

In the political arena, the presumption that anything unpleasant or offensive is prima facie to be condemned can be a powerful one. The 10 December 2021 House of Lords debate on freedom of speech was packed with pleas to be nicer to each other online: hard to disagree with as a matter of etiquette. But if being unpleasant is thought of itself to create a presumption against freedom of expression, that does not reflect human rights law.

The risk of de facto reversal of the presumption in favour of protection of speech when we focus on protecting ‘high value’ speech is all the greater where platforms are expected to act in pursuance of their safety duty proactively, in near real-time and at scale, against a duty-triggering threshold of reasonable grounds to believe. 

That is without even considering the daunting prospect of an AI algorithm that claims to be capable of assessing the public interest.

Strand 4. Create new and reformed criminal offences that would apply directly to users

In parallel with the government’s proposals for an online duty of care, the Law Commission has been conducting two projects looking at the criminal law as it affects online and other communications: Modernising Communications Offences (Law Com No 399, 21 July 2021); Hate Crime Laws (Law Com No 402, 7 December 2021).

The communications offences report recommended:

  • A new harm-based communications offence to replace S.127(1) Communications Act 2003 and the Malicious Communications Act 1988
  • A new offence of encouraging or assisting serious self-harm
  • A new offence of cyberflashing; and
  • New offences of sending knowingly false, persistent or threatening communications, to replace S.127(2) Communications Act 2003

It also recommended that the government consider legislating to criminalise maliciously sending flashing images to known sufferers of epilepsy. It was not persuaded that specific offences of pile-on harassment or glorification of violent crime would be necessary, effective or desirable.

The hate crime report made a complex series of recommendations, including extending the existing ‘stirring up’ offences to cover hatred on grounds of sex or gender. It recommended that if the draft Online Safety Bill becomes law, inflammatory hate material should be included as ‘priority illegal content’ and the stirring up offences should not apply to social media companies and other platforms in respect of user-to-user content unless intent to stir up hatred on the part of the provider could be proved.

It also recommended that the government undertake a review of the need for a specific offence of public sexual harassment (covering both online and offline).

The government has said in an interim response to the communications offences report that it proposes to include three of the recommended offences in the Bill: the harm-based communications offence, the false communications offence and the threatening communications offence. The remainder are under consideration. The hate crime report awaits an interim response.

From the point of view of the safety duties under the Online Safety Bill, the key consequence of new offences is that the dividing line between the illegality duty and the ‘legal but harmful’ duties would shift. However, the ‘reasonable grounds to believe’ threshold would not change, and would apply to the new offences as it does to existing offences.

The Petitions Committee acknowledged concerns over how the proposed harm-based offence would intersect with the illegality duties:

“The Law Commission is right to recommend refocusing online communications offences onto the harm abusive messages can cause to victims. We welcome the Government’s commitment to adopt the proposed threatening and ‘harm-based’ communications offences. However, we also acknowledge the uncertainty and hesitation of some witnesses about how the new harm-based offence will be interpreted in practice, including the role of social media companies and other online platforms in identifying this content—as well as other witnesses’ desire for the law to deal with more cases of online abuse more strongly.”

It recommended that the effectiveness of the offences be monitored, and that the government should publish an initial review of the workings and impact of any new communications offences within the first two years after they come into force.

The Joint Committee supported the Law Commission recommendations. It also suggested that concerns about ambiguity and the context-dependent nature of the proposed harm-based offence could be addressed through the statutory public interest requirement discussed above. [135]


What is a duty of care?

In its proper legal sense a duty of care is a duty to take reasonable care to avoid injuring other people – that is why it is called a duty of care. It is not a duty to prevent other people breaking the law. Nor (other than exceptionally) is it a duty to prevent other people injuring each other. Still less is it a duty to prevent other people speaking harshly to each other.

A duty of care exists in the common law of negligence and occupiers’ liability. Analogous duties exist in regulatory contexts such as health and safety law. A duty of care does not, however, mean that everyone owes a duty to avoid causing any kind of harm to anyone else in any situation. Quite the reverse. The scope of a duty of care is limited by factors such as the kind of injury, causation, foreseeability and others.

In particular, for arm’s-length relationships such as property owner and visitor (the closest analogy to platform and user) the law carefully restricts safety-related duties of care to objectively ascertainable kinds of harm: physical injury and damage to property.

Objective injury v subjective harm  Once we move into subjective speech harms the law is loath to impose a duty. The UK Supreme Court held in Rhodes that the author of a book owes no duty to avoid causing distress to a potential reader of the book. It said:

“It is difficult to envisage any circumstances in which speech which is not deceptive, threatening or possibly abusive, could give rise to liability in tort for wilful infringement of another’s right to personal safety. The right to report the truth is justification in itself. That is not to say that the right of disclosure is absolute … . But there is no general law prohibiting the publication of facts which will cause distress to another, even if that is the person’s intention.” [77]

That is the case whether the author sells one book or a million, and whether the book languishes in obscurity or is advertised on the side of every bus and taxi.

The source of some of the draft Bill’s most serious problems lies in the attempt to wrench the concept of a safety-related duty of care out of its offline context – risk of physical injury – and apply it to the contested, subjectively perceived claims of harm that abound in the context of speech.

In short, speech is not a tripping hazard. Treating it as such propels us ultimately into the territory of claiming that speech is violence: a proposition that reduces freedom of expression to a self-cancelling right.

Speech is protected as a fundamental right. Some would say it is the right that underpins all other rights. It is precisely because speech is not violence that Berkeley students enjoy the right to display placards proclaiming that speech is violent. The state is – or should be – powerless to prevent them, however wrong-headed their message.

Quite how, on the nature of speech, a Conservative government has ended up standing shoulder to shoulder with those Berkeley students is one of the ineffable mysteries of politics. 

Causing v preventing Even where someone is under a duty to avoid causing physical injury to others, that does not generally include a duty to prevent them from injuring each other. Exceptionally, such a preventative duty can (but does not necessarily) arise, for instance where the occupier of property does something that creates a risk of that happening. Serving alcohol on the premises, or using property for a public golf course, would be examples. Absent that, or a legally close relationship (such as teacher-pupil) or an assumption of responsibility, there is no duty. Still less would any preventative duty exist for what visitors say to each other on the property.

The duty proposed to be imposed on UGC platforms is thus doubly removed from offline duties of care. First, it would extend far beyond physical injury into subjective harms. Second, the duty would consist in the platform being required to prevent or restrict how users behave towards each other.

It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.

Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.

Safety by design It may be suggested that by designing in platform safety features from the start it is possible to reduce or eliminate risk, while avoiding the problems of detecting, identifying and moderating particular kinds of illegal or harmful content.

It is true that some kinds of safety feature – a reporting button, for instance – do not entail any kind of content moderation. However, risk is not a self-contained concept. We always have to ask: “risk of what?” If the answer is “risk of people encountering illegal or harmful content”, at first sight that takes the platform back towards trying to distinguish permissible from impermissible content. However, that is not necessarily so.

A typical example of safety by design concerns amplification. It is suggested that platforms should be required to design in ‘friction’ features that inhibit sharing and re-sharing of content, especially at scale.

The problem with a content-agnostic approach such as this is that it inevitably strikes at all content alike (although it would no doubt be argued that the overall impact of de-amplification is skewed towards ‘bad’ content, since that is more likely to be shared and re-shared).

However, the content-agnostic position is rarely maintained rigorously, often reverting to discussion of ways of preventing amplification of illegal or harmful content (which takes us back to identifying and moderating such content). An example of this can be seen in Joint Committee recommendation 82(e):

“Risks created by virality and the frictionless sharing of content at scale, mitigated by measures to create friction, slow down sharing whilst viral content is moderated, require active moderation in groups over a certain size…”

Criticism of amplification is encapsulated in the slogan ‘freedom of speech is not freedom of reach’. As a matter of human rights law, however, interference with the reach of communications certainly engages the right of freedom of expression. As the Indian Supreme Court held in January 2020:

“There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial.”

Broadcast regulation The model adopted by the draft Bill is discretionary regulation by a regulator, rather than regulation by the general law. Whether discretionary broadcast-style regulation is an appropriate model for individual speech is a debate in its own right.

[Grammatical correction 19 Feb 2022]