Tuesday, 11 February 2020

An Online Harms Compendium

Since June 2018 I have written a lot about the UK government's Online Harms proposals for a duty of care and an online regulator, the subsequent draft Online Safety Bill, the Bill itself in its numerous iterations and now the Online Safety Act 2023. 

This page is a rolling compendium of posts, now amounting to well over 100,000 words, organised with the most recent posts first (now up to August 2025). 

The posts aim to illuminate. That said, they are generally critical of the government's proposals and, now, the legislation.  

There is a recurring theme that the duties imposed by the Act are built on defectively designed foundations. What might an alternative approach look like? Section 9 of my submission to the White Paper consultation in 2019 has some ideas.

Also of interest may be my keynote address to the Society for Computers and Law Annual Conference on 2 October 2019, The internet: broken or about to be broken?

And, just to show that there is nothing new under the sun, there is my January 2012 post on the inappropriateness of broadcast regulation for individual speech on the internet, which includes my April 1998 article on the topic in the Financial Times.


“Ofcom, it should be acknowledged, is to an extent caught between a rock and a hard place. It has to avoid being overly technology-prescriptive, while simultaneously ensuring that the effects of its recommendations are reasonably foreseeable to users and capable of being assessed for proportionality. Like much else in the Act, that may in reality be an impossible circle to square. That does not bode well for the Act’s human rights compatibility.”


"A takedown regime of this kind inevitably faces some similar issues to those that confronted the Online Safety Act, particularly in how to go about distinguishing illegal from legal content online. The Online Safety Act eventually included some fairly tortuous provisions that attempt (whether successfully or not) to meet those challenges. In contrast, the Policing Bill amendments maintain a judicious silence on some of the thorniest issues."


"Policymakers sometimes comfort themselves that if no-one is completely satisfied, they have probably got it about right. 

On that basis, Ofcom’s implementation of the Online Safety Act’s illegality duties must be near-perfection: the Secretary of State (DSIT) administering a sharp nudge with his draft Statement of Strategic Priorities, while simultaneously under fire for accepting Ofcom’s advice on categorisation of services; volunteer-led community forums threatening to close down in the face of perceived compliance burdens; and many of the Act’s cheerleaders complaining that Ofcom’s implementation has so far served up less substantial fare than they envisaged." 

4 December 2024 Safe speech by design

"[T]he OSA duties of care – and thus safety by design - go well beyond algorithmic social media curation, extending to (for instance) platforms that do no more than enable users to post to a plain vanilla discussion forum.

Consider the OSA safety duties concerning priority illegal content and priority offences.  What kind of feature would create or increase a risk of, for example, an online user deciding to offer boat trips across the Channel to aspiring illegal immigrants?

The further we move away from positive content-related functionality, the more difficult it becomes to envisage how safety by design grounded in the notion of specific risk-creating features and functions might map on to real-world technical features of online platforms."


"The next, and perhaps most interesting, document was the [Information Commissioner's Office’s] own submission to Ofcom’s Illegal Harms consultation, published on 1 March 2024. In this document the tensions between the OSA and data protection are most evident. In some areas the ICO overtly took issue with Ofcom’s approach."


"[Marieha] Hussain was prosecuted for an offence in a public street, to which the Online Safety Act would not directly apply. However, what if an image of the placard appeared online? If displaying the placard in the street was sufficient to attract a criminal prosecution, even if ultimately unsuccessful, could the OSA (had it been in force) have required a platform to take action over an image of the placard displayed online?" 


"One of the most perplexing aspects of the OSA has always been how an automated system, operating in real time on limited available information, can make accurate judgements about illegality or apply the methodology laid down in S.192 [of the Act]: such as determining whether it has reasonable grounds to make inferences about the existence of facts or the state of mind of users.

Undaunted, s.192 contemplates that illegality judgments may be fully automated: “... whether a judgement is made by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators.”"


"The OSA regime places legal obligations on intermediary service providers. The steps that they take to comply with those obligations have the potential to affect users' rights, particularly freedom of expression. Foreseeability requires that a user should be able to predict, with reasonable certainty, whether their contemplated online post is liable to be affected by actions taken by a service provider in discharging its obligations under the Act. The safeguards stipulated by Ofcom should therefore provide the requisite degree of predictability for users in respect of blocking and removal actions to be taken by service providers when carrying out Ofcom's recommended measures."


"The purpose of at least the operational content-based aspects of the OSA regime is to harness the control that intermediary service providers can exercise over their users in order (indirectly) to regulate individual items of user content. The fact that a service provider may not be penalised for an individual misjudgement does not alter the fact that the service provider has to have a system in place to make judgements and that the user in question stands to be affected when their individual post is removed as a result of a misjudgement. The operational content-based duties exist and are about regulating individual content."


"The very title of the Ofcom consultation — Illegal Harms — prompts questions about the illegality duties. Are they about illegal content? Are they about harm? Is all illegal content necessarily harmful? What does the Act mean by harm?"


"Politicians may have convinced themselves that the legislation is all about big US social media companies and their algorithms. Ofcom has drawn the short straw of implementing the Act as it actually is: covering (according to the Impact Assessment) an estimated 25,000 UK businesses alone, 80% of which are micro-businesses (fewer than 10 employees)."


"The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression."


"Am I proportionate? ... [M]y reading of the Bill fills me with doubt. It requires me to act in ways that will inevitably lead to over-blocking and over-removal of your legal content. Can that be proportionate?

Paradoxically, the task for which it is least feasible to involve human moderators and when I am most likely to be asked to work alone – real time or near-real time blocking and filtering – is exactly that in which, through having to operate in a relative vacuum of contextual information, I will be most prone to make arbitrary judgements.
... 
Combine all these elements and the result is that I am required to remove legal content at scale. The Bill talks about proportionate systems and processes, yet it expressly requires me to act in a way that on the face of it looks disproportionate. Moreover, I am to make these judgments simultaneously for dozens of priority offences, plus their inchoate counterparts. This poses a truly existential challenge for an AI moderator such as myself."


"The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or back-bench amendments may pass.

In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure."


"In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia - France’s equivalent of the German NetzDG law – for incompatibility with fundamental rights. Although the controversy over the Loi Avia has passed into internet history, the Constitutional Council's decision provides some instructive comparisons when we examine the UK’s Online Safety Bill.

As the Bill awaits its House of Lords Committee debates, this is an opportune moment to cast our minds back to the Loi Avia decision and see what lessons it may hold. Caution is necessary in extrapolating from judgments on fundamental rights, since they are highly fact-specific; and when they do lay down principles they tend to leave cavernous room for future interpretation. Nevertheless, the Loi Avia decision makes uncomfortable reading for some core aspects of the Online Safety Bill."


"If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications. ...

...it is not so far-fetched a notion that an online platform, tasked by the Online Safety Bill proactively to detect and remove user content that encourages an illegal entry offence, might consider itself duty-bound to remove content that in actual fact would not result in prosecution or a conviction in court. There are specific reasons for this under the Bill, which contrast with prosecution through the courts."

6 January 2023 Twenty questions about the Online Safety Bill Questions submitted to the Culture Secretary's New Year public Q&A. 

13 December 2022 (Some of) what is legal offline is illegal online Discussion of the "What is Legal Offline is Illegal Online" narrative.

"Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online?  The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." 

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that."


"With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate?" 

18 August 2022 Reimagining the Online Safety Bill Reflections on the Bill during the hiatus caused by the Conservative leadership election. 

"The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.

As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies."

30 July 2022 Platforms adjudging illegality – the Online Safety Bill’s inference engine Discussion of New Clause 14 setting out how platforms should assess illegality of user content.

"One underlying issue is that (especially for real-time proactive filtering) providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making.

Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff."


"... the illegality duties under Clause 9 could be said to embody a ‘most restrictive common denominator’ approach to differences between criminal offences within the UK."


"Depending on its interpretation the Bill appears either:

6.21.1 to exclude from consideration essential ingredients of the relevant criminal offences, thereby broadening the offences to the point of arbitrariness and/or disproportionate interference with legitimate content; or

6.21.2 to require arbitrary assumptions to be made about those essential ingredients, with similar consequences for legitimate content; or

6.21.3 to require the existence of those ingredients to be adjudged, in circumstances where extrinsic factual material pertaining to those ingredients cannot be available to a filtering system.

In each case the result is arbitrariness (or impossibility), significant collateral damage to legal content, or both. It is not easy to see how on any of those interpretations the Clause 9(3) proactive filtering obligation could comply with either the prescribed by law requirement or the proportionality requirement." 

27 March 2022 Mapping the Online Safety Bill Analysis of the Bill as introduced into Parliament, with flowcharts.

"Some time ago I ventured that if the road to hell was paved with good intentions, this was a motorway. The government continues to speed along the duty of care highway.

It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.

It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content."

19 February 2022 Harm Version 4.0 - The Online Harms Bill in metamorphosis Analysis of various Committee recommendations.

"Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence."


22 November 2021 Licence to chill The Law Commission's proposed harmful communications offence.

"Now that the government has indicated that it is minded to accept the Law Commission’s recommendations [for reform of the communications offences], a closer – even if 11th hour - look is called for: doubly so, since under the proposed Online Safety Bill a service provider would be obliged to take steps to remove user content if it has “reasonable grounds to believe” that the content is illegal. The two provisions would thus work hand in glove. There is no doubt that S.127 [Communications Act 2003], at any rate, is in need of reform. The question is whether the proposed replacement is an improvement. Unfortunately, that closer look suggests that the Law Commission’s recommended harm-based offence has significant problems. These arise in particular for a public post to a general audience." 


"Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in? We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that."


"Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree."


"The draft Bill's attempt to convert subjective perception of content into an objective standard illustrates just how difficult it is to apply concepts of injury and harm to speech. The cascading levels of definition, ending up with a provision that appears to give precedence to an individual’s subjective claim to significant adverse psychological impact, will bear close scrutiny – not only in their own right, but as to how a service provider is meant to go about complying with them."

22 June 2021 Speech vs. Speech

"Can something that I write in this blog restrict someone else’s freedom of expression? According to the UK government, yes. In its Full Response to the Online Harms White Paper the government suggested that under the proposed legislation user redress mechanisms to be provided by platforms would enable users to “challenge content that unduly restricts their freedom of expression”. ...

It is difficult to make sense of appeals to freedom of expression as a fundamental right without appreciating the range of different usages and their, to some degree, contradictory underpinnings. When the same label is used to describe a right to be protected against coercive state action, a right whose existence is predicated on coercive state action, and everything in between, the prospects of conducting a debate on common ground are not good.

Prompted by the existence of the Lords Communications and Digital Committee Inquiry into Freedom of Expression Online, this piece aims – without any great expectation of success - to dispel some of the fog."


"The government’s draft Online Safety Bill announcement claimed that the measures required of ordinary and large providers would “remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties.” (emphasis added)

This bold statement – in contrast with the more modest claim in the Impact Assessment - shows every sign of being another unfulfillable promise, whether for news publisher content or user-generated content generally.

Lord Black said in the Lords debate:
“We have the opportunity with this legislation to lead the world in ensuring proper regulation of news content on the internet, and to show how that can be reconciled with protecting free speech and freedom of expression. It is an opportunity we should seize.”
It can be no real surprise that a solution to squaring that circle is as elusive now as when the Secretary of State wrote to the Society of Editors two years ago. It has every prospect of remaining so."


"Internal conflicts between duties, underpinned by the Version 3.0 approach to the notion of harm, sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as - to use the Secretary of State's words - a censor’s charter."

17 December 2020 The Online Harms edifice takes shape Analysing the government's Final response to consultation on the White Paper.

"These are difficult issues that go to the heart of any proposal to impose a duty of care. They ought to have been the subject of debate over the last couple of years. Unfortunately they have been buried in the rush to include every conceivable kind of harm - however unsuited it might be to the legal instrument of a duty of care - and in discussions of ‘systemic’ duties of care abstracted from consideration of what should and should not amount to harm.

It should be no surprise if the government’s proposals became bogged down in a quagmire resulting from the attempt to institute a universal law of everything, amounting to little more than a vague precept not to behave badly online. The White Paper proposals were a castle built on quicksand, if not thin air.

The proposed general definition of harm, while not perfect, gives some shape to the edifice. It at least sets the stage for a proper debate on the limits of a duty of care, the legally protectable nature of personal safety online, and its relationship to freedom of speech – even if that should have taken place two years ago. Whether regulation by regulator is the appropriate way to supervise and police an appropriately drawn duty of care in relation to individual speech is another matter."


"The question for an intermediary subject to a legal duty of care will be: “are we obliged to 
consider taking steps (and if so what steps) in respect of these words, or this image, in this context?”

If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be."

24 June 2020 Online Harms Revisited Based on a panel presentation to the Westminster e-Forum event on Online Regulation on 23 June 2020.

"Heading down the “law of everything” road was always going to land the government in the morass of complexity and arbitrariness in which it now finds itself. One of the core precepts of the White Paper is imposing safety by design obligations on intermediaries. But if anything is unsafe by design, it is this legislation."

20 June 2020 Online Harms and the Legality Principle

"The government would no doubt be tempted to address [legality] issues by including statutory obligations on the regulator, for instance to have regard to the fundamental right of freedom of expression and to act proportionately. That may be better than nothing. But can a congenital defect in legislation really be cured by the statutory equivalent of exhorting the patient to get better?" 

24 May 2020 A Tale of Two Committees

"Two Commons Committees –the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – have recently held evidence sessions with government Ministers discussing, among other things, the government’s proposed Online Harms legislation. These sessions proved to be as revealing, if not more so, about the government’s intentions as its February 2020 Initial Response to the White Paper."

12 May 2020 Disinformation and Online Harms Presentation to CIGR Annual Conference, 1 May 2020. 

"If you can't articulate a clear and certain rule about speech, you don't get to make a rule at all."   

16 February 2020 Online Harms Deconstructed - the Initial Consultation Response 

"Preliminaries aside, what is the current state of the government’s thinking? ... This post takes a microscope to some of the main topics, comparing the White Paper text with that in the Response."

2 February 2020 Online Harms IFAQ. Insufficiently Frequently Asked Questions: a grand tour of the landscape, taking in a digital Lord Chamberlain, harm reduction cycles, systemic versus content regulation, fundamental rights, the rule of law and more.

“The particular vice at the heart of the White Paper is the latitude for the regulator to deem things to be harmful. If the proposals were only about safety properly so-called, such as risk of death and personal injury, that would correspond to offline duties of care and draw a clear line. Or if the proposals were only about unlawful online behaviour, the built-in line between lawful and unlawful would provide some protection against overreach. But the proposals are neither, and they do not.”

28 June 2019 Speech is not a tripping hazard My detailed submission to the government consultation on the White Paper.

"The duty of care would trump existing legislation. The Rhodes case study illustrates the extent to which the proposed duty of care would, to all intents and purposes, set up a parallel legal regime controlling speech online, comprising rules devised by the proposed regulator under the umbrella of a general rubric of harm. This parallel regime would in practice take precedence over the body of legislation in the statute book and common law that has been carefully crafted to address the boundaries of a wide variety of different kinds of speech."


5 May 2019 The Rule of Law and the Online Harms White Paper Analysing the White Paper against the Ten Point Rule of Law Test proposed in March.  

“The idea of  the test is less to evaluate the substantive merits of the government’s proposal … but more to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.”

18 April 2019 Users Behaving Badly – the Online Harms White Paper Analysing the White Paper.

“Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. “Ofweb” would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online. … This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.”


“...the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'etre is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake. … Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.”

19 October 2018 Take care with that social media duty of care Analysing why offline duties of care do not transpose into online speech regulation. 

“The law does not impose a universally applicable duty of care to take steps to prevent or reduce any kind of foreseeable harm that visitors may cause to each other; certainly not when the harm is said to have been inflicted by words rather than by a knife, a flying lump of concrete or an errant golf ball.”

7 October 2018 A Lord Chamberlain for the internet? Thanks, but no thanks ‘Regulation by regulator’ is a bad model for online speech. 

“If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.”

5 June 2018 Regulating the internet – intermediaries to perpetrators The fallacy of the unregulated internet. 

“The choice is not between regulating or not regulating.  If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.”

[Updated 22 February 2020 to add 'Online Harms Deconstructed', 28 May 2020 to add 'A Tale of Two Committees', 22 June 2020 to add 'Online Harms and the Legality Principle', 11 July 2020 to add CIGR presentation and 'Online Harms Revisited'; 15 December 2020 to add Ofcom submission; 19 December 2020 to add 'The Online Harms edifice takes shape'; 17 May 2021 to add "Harm Version 3.0: the draft Online Safety Bill"; 11 July 2021 to add 'Carved out or carved up? The draft Online Safety Bill and the press' and 'On the trail of the Person of Ordinary Sensibilities'; 4 February 2022 to add four further posts; 19 February 2022 to add 'Harm Version 4.0'; 30 July 2022 to add 'Mapping the Online Safety Bill' and two evidence submissions to the Public Bill Committee; 31 July 2022 to add 'Platforms adjudging illegality – the Online Safety Bill’s inference engine'; 2 November 2022 to add 'Reimagining the Online Safety Bill'; 27 November 2022 to add 'How well do you know the Online Safety Bill?'; 6 January 2023 to re-order, add some descriptions, improve formatting, and add '(Some of) what is legal offline is illegal online' and 'Twenty questions about the Online Safety Bill'; 8 April 2023 to add 'Positive light or fog in the Channel?' and 'Five lessons from the Loi Avia'; 15 July 2023 to add 'The Pocket Online Safety Bill', 'Knowing the unknowable: musings of an AI content moderator' and 'Shifting paradigms in platform regulation'; May 2025 to add 9 further posts up to 'The Online Safety Act grumbles on'; 8 June 2025 to redraft introduction and add 'Knives out for knives'; 5 August 2025 to add ‘Ofcom’s proactive technology measures: principles-based or vague?’.]



Sunday, 2 February 2020

Online Harms IFAQ*


*Insufficiently Frequently Asked Questions

Eager Student has some questions for Scholarly Lawyer about the UK government’s Online Harms White Paper.

ES. Why are people so perturbed about the proposed Online Harms legislation?

SL. Most people aren’t, even if they have heard of it. But they should be.

ES. What can be wrong with preventing harm?

SL. Indeed, who could be against that? But this is a proposal for legislation. We have to peel off the label and check what is in the bottle.  When we look inside, we find a wormhole back to a darker age.

ES. A darker age?

SL. Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace.

The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same.  And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.

ES. This is melodramatic, surely. How could a digital Lord Chamberlain censor the tens of millions of us who use the internet and social media?

SL. Would the regulator personally strike a blue pencil through specific items of content? No. But the legislation is no less censorious for that. It would co-opt internet platforms and search engines to do the work, under the banner of a duty of care articulated and enforced by the regulator.

ES. Doesn’t the internet route around censorship?

SL. Once your post or tweet has passed through your ISP’s system it is on the internet, chopped up into fragments and bouncing around an interconnected mesh of routers: very difficult to isolate and capture or block.

But your own ISP has always been a choke point. It can be turned into a gatekeeper instead of a gateway. That danger was recognised early on when Article 15 of the ECommerce Directive prevented EU governments from doing that.

Nowadays we communicate via large platforms: social media companies, messaging apps and the rest. Our tweets, posts and instant messages end up in a few databases. So we have new choke points. Companies may be able to block, filter and take down users’ content on their platforms, even if broad brush rather than blue pencil. Our digital Lord Chamberlain would commandeer that capability at one remove.

ES. Even so, where is the censorship? This is about online safety, not suppressing controversy.

SL. The particular vice at the heart of the White Paper is the latitude for the regulator to deem things to be harmful. If the proposals were only about safety properly so-called, such as risk of death and personal injury, that would correspond to offline duties of care and draw a clear line. Or if the proposals were only about unlawful online behaviour, the built-in line between lawful and unlawful would provide some protection against overreach. But the proposals are neither, and they do not.

ES. If something is not presently unlawful, would the White Paper proposals make it so?

SL. Yes and no. Asserting that you have been harmed by reading something that I have posted online is not a basis for a legal claim against me, even if you are shocked, outraged, offended or distressed by it. There would have to be more, such as incitement or harassment. That is the same rule as for publishing offline.  

But the White Paper’s duty of care could require a platform to take action against that same post, regardless of its lawfulness. It might extend to blocking, filtering or removing. The platform could be fined if it didn’t have good enough systems for reducing the risk of harm that the regulator deemed to exist from that kind of content.

ES. So the platform could face legal consequences for failing to censor my material, even though I have done nothing unlawful by writing and posting it?

SL. Exactly so.

ES. I’m curious as to why anyone would set up a new regime to address behaviour that is already unlawful.

SL. To create a new law enforcement route. A digital Lord Chamberlain would co-opt online intermediaries to the task: mixing our metaphors horribly, an online sheriff riding out with a posse of conscripted deputies to bring order to Digital Dodge. Whether that is a good way of going about things is a matter of highly contested opinion.

But the genesis of the White Paper was in something quite different from unlawful behaviour: lawful online content and behaviour said to pose a risk of harm, whether to an individual user or to society.

ES. Is it really right that the regulator could treat almost anything as harmful?

SL. The White Paper proposes to impose a duty of care on online intermediaries in respect of harm. It offers no definition of harm, nor what constitutes risk of harm. Apart from a few specific exclusions, that would be open to interpretation by the new online regulator.

ES. A duty of care. Like health and safety?

SL. The name is similar, but the White Paper’s duty of care bears scant resemblance to comparable offline duties of care.

ES. How does it differ?

SL. In two main ways. First, corresponding duties of care that have been developed in the offline world are limited to safety in the acknowledged sense of the word: risk of death, personal injury and damage to physical property.  Those are objective concepts which contribute to keeping duties of care within ascertainable and just limits. Universal, generalised duties of care do not exist offline.  

The White Paper duty of care is not limited in that way. It speaks only of harm, undefined. Harm is a broad concept. Harm and safety become fluid, subjective notions when applied to speech. We can find ourselves asserting that speech is violence and that we are entitled to be kept safe not just from threats and intimidation, but from opinions and facts that we find disagreeable.

The online regulator could adopt the standard of the most easily offended reader. It could re-invent blasphemy under the umbrella of the duty of care. It could treat the internet like a daytime TV show. Ofcom, let us not forget, suggested to survey participants in 2018 that bad language is a “harmful thing”. In last year’s survey it described “offensive language” as a “potential harm”.

Undefined harm, when applied to speech, is not just in the eye and ear of the beholder but in the opinion of the regulator.  The duty, and thus the regulator’s powers, would deliberately go far wider than anything that is contrary to the law.

ES. So even if our digital Lord Chamberlain is not censoring specific items, it still gets to decide what kind of thing is and is not harmful?

SL. Yes.  Writing a Code of Practice can be as censorious as wielding a blue pencil.

ES. And the second difference from an offline duty of care?

SL. Health and safety is about injury caused by the occupier or employer: tripping over a loose floorboard, for instance.  A duty of care would not normally extend to injury inflicted by one visitor on another. It could do if the occupier or employer created or exacerbated the specific risk of that happening, or assumed responsibility for the risk.

But least of all is there an offline duty of care in respect of what one visitor says to another. That is exactly the kind of duty that the White Paper proposals would impose on online intermediaries.

The White Paper is twice removed from any comparable offline duty of care.

ES. Could they not define a limited version of harm?

SL.  John Humphrys suggested that on the Today programme back in April last year. The New Zealand Harmful Digital Communications Act passed in 2015 did define harm: “serious emotional distress”. That has the merit of focusing on a significant aspect of the mischief that is so often associated with social media: bullying, intimidation, abuse and the rest. But that would be a small sliver of the landscape covered by the White Paper.

Recently one of the progenitors of the plan for a duty of care, the Carnegie UK Trust, published its own suggested draft Bill. It does not define or limit the concept of harm.

ES. Is there a disadvantage in defining harm?

SL. That depends on your perspective. A clear, objective boundary for harm would inevitably shrink the regulator’s remit and limit its discretion. It would have to act within the boundaries of a set of rules. So if your aim is to maximise the power and flexibility of the regulator, then you don’t want to cramp its style by setting clear limits. From this viewpoint vagueness is a virtue.

Opinions will differ on whether that is a good or a bad thing. Traditionally the consensus has been that individual speech should be governed by rules, not discretion. Discretion, it might be thought, is a defining characteristic of censorship.

The very breadth of the territory covered by the White Paper and its predecessor Internet Safety Green Paper may explain why the objection to undefined harm is so steadfastly ignored. A catch-all is no longer a catch-all once you spell out what it catches.

ES. You say this is about individual speech, but isn’t the point to regulate the platforms?

SL. It is true that the intermediaries would be the direct subject of the duty of care. They would suffer the penalties if they breached the duty. But we users are the ones who would suffer the effects of our online speech being deemed harmful. Any steps that the platforms were obliged to take would be aimed at what we do and say online. Users are the ultimate, if indirect, target.

ES. I have seen it said that the duty of care should focus on platforms’ processes rather than on specific user content. How might that work?

SL. Carnegie has criticised the White Paper as being overly focused on types of content. It says the White Paper opened up the government to “(legitimate) criticism from free speech campaigners and other groups that this is a regime about moderation, censorship and takedown”. Instead the regime should be about designing services that “hold in balance the rights of all users, reduce the risk of reasonably foreseeable harms to individuals and mitigate the cumulative impact on society.”

To this end Carnegie proposes a ‘systemic’ approach: “cross-cutting codes which focus on process and the routes to likely harm”. For the most part its draft Bill categorises required Codes of Practice not by kinds of problematic speech (misinformation and so on) but in terms of risk assessment, risk factors in service design, discovery and navigation procedures, how users can protect themselves from harm, and transparency. There would be specific requirements on operators to carry out risk assessments, risk minimisation measures and testing, and to be transparent.

Carnegie has said: “The regulatory emphasis would be on what is a reasonable response to risk, taken at a general level. In this, formal risk assessments constitute part of the harm reduction cycle; the appropriateness of responses should be measured by the regulator against this.”

ES. Would a systemic approach make a difference?

SL. The idea is that a systemic approach would focus on process design and distance the regulator from judgements about content. The regulator should not, say Carnegie, be taking decisions about user complaints in individual cases.

But risk is not a self-contained concept. We have to ask: ‘risk of what?’ If the only answer provided is ‘harm’, and harm is left undefined, we are back to someone having to decide what counts as harmful. Only then can we measure risk. How can we do that without considering types of content? How can the regulator measure the effectiveness of intermediaries’ harm reduction measures without knowing what kind of content is harmful?

What Carnegie has previously said about measuring harm reduction suggests that the regulator would indeed have to decide which types of content are to be regarded as harmful: “…what is measured is the incidence of artefacts that – according to the code drawn up by the regulator – are deemed as likely to be harmful...”. By “artefacts” Carnegie means “types of content, aspects of the system (e.g. the way the recommender algorithm works) and any other factors”. (emphases added)

ES. What is the harm reduction cycle?

SL. This is the core of the Carnegie model. It envisages a repeated process of adjusting the design of intermediaries’ processes to squeeze harm out of the system.

Carnegie has said: “Everything that happens on a social media or messaging service is a result of corporate decisions: …” From this perspective user behaviour is a function of the intermediaries’ processes - processes which can be harnessed by means of the duty of care to influence, nudge – or even fundamentally change - user behaviour.

Carnegie talks of a “Virtuous circle of harm reduction on social media and other internet platforms. Repeat this cycle in perpetuity or until behaviours have fundamentally changed and harm is designed out.”

There is obviously a question lurking there about the degree to which users control their own behaviour, versus the degree to which they are passive instruments of the platforms that they use to communicate.

ES. I sense an incipient rant.

SL. Mmm. The harm reduction cycle does prompt a vision of social media users as a load of clothes in a washing machine: select the programme marked harm reduction, keep cycling the process, rinse and repeat until harm is cleaned out of the system and a sparkling new internet emerges.

But really, internet users are not a bundle of soiled clothes to be fed into a regulator’s programmed cleaning cycle; and their words are not dirty water to be pumped out through the waste hose.

Internet users are human beings who make decisions – good, bad and indifferent - about what to say and do online; they are autonomous, not automatons. It verges on an affront to human dignity to design regulation as if people are not the cussed independent creatures that we know they are, but human clay to be remoulded by the regulator’s iterative harm reduction algorithm. We did not free ourselves from technology Utopianism only to layer technocratic policy Utopianism on top of it.

ES. Can system be divorced from content?

SL. It is difficult to see how, say, a recommender algorithm can be divorced from the kind of content recommended, unless we think of recommending content as likely to be harmful per se. Should we view echo chambers as intrinsically harmful regardless of the content involved? Some may take that view, perhaps the future regulator among them. Whether it is appropriate for a regulator to be empowered to make that kind of judgement is another matter.

Realistically, a duty of care can hardly avoid being about – at least for the most part – what steps the intermediary has to take in respect of what kinds of content. The blue pencil – even in the guise of a broad brush wielded by intermediaries according to Codes of Practice arranged by process – would ultimately rest in the hand of our digital Lord Chamberlain.

ES. Surely there will be some limits on what the regulator can do?

SL. The White Paper excluded some harms, including those suffered by organisations.  The Carnegie draft Bill does not. It does contain a hard brake on the regulator’s power: it must not require service providers to carry out general monitoring. That is intended to comply with Article 15 of the Electronic Commerce Directive.

The government has also said there is no intention to include the press. The Carnegie draft Bill attempts a translation of that into statutory wording.

ES. Are there any other brakes?

SL. The regulator will of course be bound by human rights principles. Those are soft rather than hard brakes: less in the nature of red lines, more a series of hurdles to be surmounted in order to justify the interference with the right – if the right is engaged in the first place.

The fact that a right is engaged does not mean, under European human rights law, that the interference is always a violation. It does mean that the interference has to be prescribed by law and justified as necessary and proportionate.

ES.  Would a duty of care engage freedom of expression?

SL. The speech of individual end users is liable to be suppressed, inhibited or interfered with as a consequence of the duty of care placed on intermediaries. Since the intermediaries would be acting under legal compulsion (even if they have some choice about how to comply), such interference is the result of state action. That should engage the freedom of speech rights of end users. As individual end users’ speech is affected it doesn’t matter whether corporate intermediaries themselves have freedom of expression rights.

The right is engaged whether the interference is at the stage of content creation, dissemination, user engagement or moderation/deletion. These are the four points that Carnegie identifies in a paper that discusses the compatibility of its draft Bill with fundamental freedoms.

Carnegie contemplates that tools made available to users, such as image manipulation and filters, could be in scope of the duty of care. It suggests that it is at least arguable that state control over access to creation controls (paint, video cameras and such like) could be seen as an interference. Similarly for a prohibition on deepfake tools, if they were held to be problematic.  

Carnegie suggests that it is a difficult question whether, if we have been nudged in one direction, it is an interference to counter-nudge. However, the question is surely not whether counter-nudging required under a duty of care is an interference that engages the right of freedom of expression (it is hard to see how it could not be), but whether the interference can be justified.

ES. What if intermediaries had only to reduce the speed at which users’ speech circulates, or limit amplification by tools such as likes and retweets?

SL. Still an interference. People like to say that freedom of speech is not freedom of reach, but that is just a slogan. If the state interferes with the means by which speech is disseminated or amplified, it engages the right of freedom of expression. Confiscating a speaker’s megaphone at a political rally is an obvious example. Cutting off someone’s internet connection is another. The Supreme Court of India said earlier this month, in a judgment about the Kashmir internet shutdown:
“There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial."

Seizing a printing press does not cease to be an interference merely because the publisher has the alternative of handwriting. Freedom of speech is not just freedom to whisper.

ES. So you can’t just redefine a right so as to avoid justifying the interference?

SL. Exactly. It is dangerous to elide the scope of the right and justified interference; and it is easy to slip into that kind of shorthand: “Freedom X does not amount to the right to do Y.” The right to do Y is almost always a question of whether it is justifiable to prevent Y in particular factual circumstances, not whether Y is generally excluded from protection.

The Carnegie fundamental freedoms paper says: “Requiring platforms to think about the factors they take into account when prioritising content (and the side effects of that) is unlikely to engage a user’s rights; it is trite to say that freedom of speech does not equate to the right to maximum reach”. However, it is one thing to justify intervening so as to reduce reach. It is quite another to argue that the user’s freedom of speech rights are not engaged at all.

ES. So the regulator could justify restricting the provision or use of amplification tools?

SL. If the interference is necessary in pursuance of a legitimate public policy aim, and is proportionate to that aim, then it can be justified. Proportionality involves a balancing exercise with other rights, and consideration of safeguards against abuse. The more serious the interference, the greater the justification required.

ES. That sounds a bit vague.

SL. Yes. It gets more so when we add the notion that the state may have a positive obligation to take action to secure a right. That commonly occurs with rights that may conflict with freedom of expression, such as privacy or freedom of religion.  We can even end up arguing that the state is obliged to inhibit some people’s speech in order to secure the genuine, effective right of others to enjoy their own right of expression; to ensure that the right is not illusory; or to secure the availability of a broad range of information.

ES. The state is required to destroy the village in order to save it?

SL.  Not quite, but increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. 

So Carnegie questions whether removing a retweet facility is really a violation of users' rights to formulate their own opinion and express their views, or rather – to the contrary – a mechanism to support those rights by slowing users down so that they can better appreciate content, especially as regards onward sharing.

The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.

ES. So can we rely on fundamental rights to preserve the village?

SL. The problem is that there are so few red lines. The village can start to sink into a quagmire of competing rights that must be balanced with each other.

The Carnegie fundamental freedoms paper well illustrates the issue. It is a veritable morass of juxtaposed rights, culminating with the inevitable, but not greatly illuminating, conclusion that interferences may be justified if a fair balance is maintained between conflicting rights, and suggesting that the state may have positive obligations to intervene. That is not a criticism of the paper. It is how European human rights law has developed.

ES. You mentioned the ‘prescribed by law’ requirement. How does that apply?

SL. ‘Prescribed by law’ is the first step in assessing Convention compatibility of an interference. If you don’t pass that step you don’t move on to legitimate objective, necessity and proportionality. Prescribed by law means not just issued by the legislature and publicly accessible, but having the quality of law.

Quality of law is an articulation of the rule of law. Restrictions must be framed sufficiently clearly and precisely that someone can, in advance, know with reasonable certainty whether their conduct is liable to have legal consequences. In the context of the duty of care, the legal consequence is that a user’s speech is liable to be the subject of preventive, inhibiting or other action by a platform operator carrying out its duty.

Quality of law is a particular concern for discretionary state powers, which by their nature are liable to be exercised ad hoc and arbitrarily in the absence of clear rules.

ES. Discretionary power would be relevant to the regulator, then.

SL. Yes. As we have seen, the proposed regulator would have very wide discretion to decide what constitutes harm, and then what steps intermediaries should take to reduce or prevent risk of harm. You can build in consultation and 'have regard' obligations, but that doesn't change the nature of the task.

ES. Couldn’t the quality of law gap be filled by the regulator’s Codes of Practice?

SL. It is certainly possible for non-statutory material such as Codes of Practice to be taken into account in assessing certainty and precision of the law. Some would say that the ability of the regulator to fill the gap simply highlights the extent to which the draft Bill would delegate power over individual speech to the regulator. 

ES. Isn’t that a doctrinaire rather than a legal objection?

SL. Partly so. However, in the ACLU v Reno case in the late 1990s the US courts took a medium-specific approach, holding that the kinds of restriction that might be justified for broadcast were not so for individual speech on the internet. So the appropriateness of the very model of discretionary broadcast regulation could come into question when applied to individual speech. It bears repeating that the duty of care affects the speech of individuals.

ES. Carnegie says that its emerging model would ‘champion the rule of law online’.

SL. The essence of the rule of law is “the restriction of the arbitrary exercise of power by subordinating it to well-defined and established laws” (OED). That is characteristic of the general law, but not of discretionary regulation.

If anything, regulation by regulator presents a challenge to the rule of law. Its raison d’etre is to maximise the power, flexibility and discretion of the regulator. Some may think that is a good approach. But good approach or bad, champion the rule of law it does not.

ES. Thank you.