Tuesday 11 February 2020

An Online Harms Compendium

I have written a lot about the Online Harms proposals for a duty of care and an online regulator. Now that the government’s response to the White Paper consultation appears to be imminent, this seems a good time to collect it all together for easy reference. [Posts are from June 2018 onwards, updated through to July 2023 (most recent posts first).]

These posts are generally critical of the government's proposals. What might an alternative approach look like? Section 9 of my submission to the White Paper consultation has some ideas.

Also of interest may be my keynote address to the Society of Computers and Law Annual Conference on 2 October 2019, The internet: broken or about to be broken?

And just to show that there is nothing new under the sun, my January 2012 post on the inappropriateness of broadcast regulation for individual speech on the internet, including my April 1998 article on the topic in the Financial Times.


"The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression."


"Am I proportionate? ... [M]y reading of the Bill fills me with doubt. It requires me to act in ways that will inevitably lead to over-blocking and over-removal of your legal content. Can that be proportionate?

Paradoxically, the task for which it is least feasible to involve human moderators and when I am most likely to be asked to work alone – real time or near-real time blocking and filtering – is exactly that in which, through having to operate in a relative vacuum of contextual information, I will be most prone to make arbitrary judgements.
... 
Combine all these elements and the result is that I am required to remove legal content at scale. The Bill talks about proportionate systems and processes, yet it expressly requires me to act in a way that on the face of it looks disproportionate. Moreover, I am to make these judgments simultaneously for dozens of priority offences, plus their inchoate counterparts. This poses a truly existential challenge for an AI moderator such as myself."


"The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or back-bench amendments may pass.

In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure."


"In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia - France’s equivalent of the German NetzDG law – for incompatibility with fundamental rights. Although the controversy over the Loi Avia has passed into internet history, the Constitutional Council's decision provides some instructive comparisons when we examine the UK’s Online Safety Bill.

As the Bill awaits its House of Lords Committee debates, this is an opportune moment to cast our minds back to the Loi Avia decision and see what lessons it may hold. Caution is necessary in extrapolating from judgments on fundamental rights, since they are highly fact-specific; and when they do lay down principles they tend to leave cavernous room for future interpretation. Nevertheless, the Loi Avia decision makes uncomfortable reading for some core aspects of the Online Safety Bill."


"If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications. ...

...it is not so far-fetched a notion that an online platform, tasked by the Online Safety Bill proactively to detect and remove user content that encourages an illegal entry offence, might consider itself duty-bound to remove content that in actual fact would not result in prosecution or a conviction in court. There are specific reasons for this under the Bill, which contrast with prosecution through the courts."

6 January 2023 Twenty questions about the Online Safety Bill Questions submitted to the Culture Secretary's New Year public Q&A. 

13 December 2022 (Some of) what is legal offline is illegal online Discussion of the "What is Legal Offline is Illegal Online" narrative.

"Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online?  The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." 

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that."


"With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate?" 

18 August 2022 Reimagining the Online Safety Bill Reflections on the Bill during the hiatus caused by the Conservative leadership election. 

"The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.

As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies."

30 July 2022 Platforms adjudging illegality – the Online Safety Bill’s inference engine Discussion of New Clause 14 setting out how platforms should assess illegality of user content.

"One underlying issue is that (especially for real-time proactive filtering) providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making.

Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff."


"... the illegality duties under Clause 9 could be said to embody a ‘most restrictive common denominator’ approach to differences between criminal offences within the UK."


"Depending on its interpretation the Bill appears either:

6.21.1 to exclude from consideration essential ingredients of the relevant criminal offences, thereby broadening the offences to the point of arbitrariness and/or disproportionate interference with legitimate content; or

6.21.2 to require arbitrary assumptions to be made about those essential ingredients, with similar consequences for legitimate content; or

6.21.3 to require the existence of those ingredients to be adjudged, in circumstances where extrinsic factual material pertaining to those ingredients cannot be available to a filtering system.

In each case the result is arbitrariness (or impossibility), significant collateral damage to legal content, or both. It is not easy to see how on any of those interpretations the Clause 9(3) proactive filtering obligation could comply with either the prescribed by law requirement or the proportionality requirement." 

27 March 2022 Mapping the Online Safety Bill Analysis of the Bill as introduced into Parliament, with flowcharts.

"Some time ago I ventured that if the road to hell was paved with good intentions, this was a motorway. The government continues to speed along the duty of care highway.

It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.

It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content."

19 February 2022 Harm Version 4.0 - The Online Harms Bill in metamorphosis Analysis of various Committee recommendations.

"Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence."


22 November 2021 Licence to chill The Law Commission's proposed harmful communications offence.

"Now that the government has indicated that it is minded to accept the Law Commission’s recommendations [for reform of the communications offences], a closer – even if 11th hour - look is called for: doubly so, since under the proposed Online Safety Bill a service provider would be obliged to take steps to remove user content if it has “reasonable grounds to believe” that the content is illegal. The two provisions would thus work hand in glove. There is no doubt that S.127 [Communications Act 2003], at any rate, is in need of reform. The question is whether the proposed replacement is an improvement. Unfortunately, that closer look suggests that the Law Commission’s recommended harm-based offence has significant problems. These arise in particular for a public post to a general audience." 


"Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in? We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that."


"Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree."


"The draft Bill's attempt to convert subjective perception of content into an objective standard illustrates just how difficult it is to apply concepts of injury and harm to speech. The cascading levels of definition, ending up with a provision that appears to give precedence to an individual’s subjective claim to significant adverse psychological impact, will bear close scrutiny – not only in their own right, but as to how a service provider is meant to go about complying with them."

22 June 2021 Speech vs. Speech

"Can something that I write in this blog restrict someone else’s freedom of expression? According to the UK government, yes. In its Full Response to the Online Harms White Paper the government suggested that under the proposed legislation user redress mechanisms to be provided by platforms would enable users to “challenge content that unduly restricts their freedom of expression”. ...

It is difficult to make sense of appeals to freedom of expression as a fundamental right without appreciating the range of different usages and their, to some degree, contradictory underpinnings. When the same label is used to describe a right to be protected against coercive state action, a right whose existence is predicated on coercive state action, and everything in between, the prospects of conducting a debate on common ground are not good.

Prompted by the existence of the Lords Communications and Digital Committee Inquiry into Freedom of Expression Online, this piece aims – without any great expectation of success - to dispel some of the fog."


"The government’s draft Online Safety Bill announcement claimed that the measures required of ordinary and large providers would “remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties.” (emphasis added)

This bold statement – in contrast with the more modest claim in the Impact Assessment - shows every sign of being another unfulfillable promise, whether for news publisher content or user-generated content generally.

Lord Black said in the Lords debate:
“We have the opportunity with this legislation to lead the world in ensuring proper regulation of news content on the internet, and to show how that can be reconciled with protecting free speech and freedom of expression. It is an opportunity we should seize.”
It can be no real surprise that a solution to squaring that circle is as elusive now as when the Secretary of State wrote to the Society of Editors two years ago. It has every prospect of remaining so."


"Internal conflicts between duties, underpinned by the Version 3.0 approach to the notion of harm, sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as - to use the Secretary of State's words - a censor’s charter."

17 December 2020 The Online Harms edifice takes shape Analysing the government's Final response to consultation on the White Paper.

"These are difficult issues that go to the heart of any proposal to impose a duty of care. They ought to have been the subject of debate over the last couple of years. Unfortunately they have been buried in the rush to include every conceivable kind of harm - however unsuited it might be to the legal instrument of a duty of care - and in discussions of ‘systemic’ duties of care abstracted from consideration of what should and should not amount to harm.

It should be no surprise if the government’s proposals became bogged down in a quagmire resulting from the attempt to institute a universal law of everything, amounting to little more than a vague precept not to behave badly online. The White Paper proposals were a castle built on quicksand, if not thin air.

The proposed general definition of harm, while not perfect, gives some shape to the edifice. It at least sets the stage for a proper debate on the limits of a duty of care, the legally protectable nature of personal safety online, and its relationship to freedom of speech – even if that should have taken place two years ago. Whether regulation by regulator is the appropriate way to supervise and police an appropriately drawn duty of care in relation to individual speech is another matter."


"The question for an intermediary subject to a legal duty of care will be: “are we obliged to consider taking steps (and if so what steps) in respect of these words, or this image, in this context?”

If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be."

24 June 2020 Online Harms Revisited Based on a panel presentation to the Westminster e-Forum event on Online Regulation on 23 June 2020.

"Heading down the “law of everything” road was always going to land the government in the morass of complexity and arbitrariness in which it now finds itself. One of the core precepts of the White Paper is imposing safety by design obligations on intermediaries. But if anything is unsafe by design, it is this legislation."

20 June 2020 Online Harms and the Legality Principle

"The government would no doubt be tempted to address [legality] issues by including statutory obligations on the regulator, for instance to have regard to the fundamental right of freedom of expression and to act proportionately. That may be better than nothing. But can a congenital defect in legislation really be cured by the statutory equivalent of exhorting the patient to get better?" 

24 May 2020 A Tale of Two Committees

"Two Commons Committees – the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – have recently held evidence sessions with government Ministers discussing, among other things, the government’s proposed Online Harms legislation. These sessions proved to be as revealing, if not more so, about the government’s intentions as its February 2020 Initial Response to the White Paper."

12 May 2020 Disinformation and Online Harms Presentation to CIGR Annual Conference, 1 May 2020. 

"If you can't articulate a clear and certain rule about speech, you don't get to make a rule at all."   

16 February 2020 Online Harms Deconstructed - the Initial Consultation Response 

"Preliminaries aside, what is the current state of the government’s thinking? ... This post takes a microscope to some of the main topics, comparing the White Paper text with that in the Response."

2 February 2020 Online Harms IFAQ. Insufficiently Frequently Asked Questions: a grand tour of the landscape, taking in a digital Lord Chamberlain, harm reduction cycles, systemic versus content regulation, fundamental rights, the rule of law and more.

“The particular vice at the heart of the White Paper is the latitude for the regulator to deem things to be harmful. If the proposals were only about safety properly so-called, such as risk of death and personal injury, that would correspond to offline duties of care and draw a clear line. Or if the proposals were only about unlawful online behaviour, the built-in line between lawful and unlawful would provide some protection against overreach. But the proposals are neither, and they do not.”

28 June 2019 Speech is not a tripping hazard My detailed submission to the government consultation on the White Paper.

"The duty of care would trump existing legislation. The Rhodes case study illustrates the extent to which the proposed duty of care would, to all intents and purposes, set up a parallel legal regime controlling speech online, comprising rules devised by the proposed regulator under the umbrella of a general rubric of harm. This parallel regime would in practice take precedence over the body of legislation in the statute book and common law that has been carefully crafted to address the boundaries of a wide variety of different kinds of speech."


5 May 2019 The Rule of Law and the Online Harms White Paper Analysing the White Paper against the Ten Point Rule of Law Test proposed in March.  

“The idea of the test is less to evaluate the substantive merits of the government’s proposal … but more to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.”

18 April 2019 Users Behaving Badly – the Online Harms White Paper Analysing the White Paper.

“Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. “Ofweb” would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online. … This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.”


“...the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'être is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake. … Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.”

19 October 2018 Take care with that social media duty of care Analysing why offline duties of care do not transpose into online speech regulation. 

“The law does not impose a universally applicable duty of care to take steps to prevent or reduce any kind of foreseeable harm that visitors may cause to each other; certainly not when the harm is said to have been inflicted by words rather than by a knife, a flying lump of concrete or an errant golf ball.”

7 October 2018 A Lord Chamberlain for the internet? Thanks, but no thanks ‘Regulation by regulator’ is a bad model for online speech. 

“If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.”

5 June 2018 Regulating the internet – intermediaries to perpetrators The fallacy of the unregulated internet. 

“The choice is not between regulating or not regulating.  If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.”

[Updated 22 February 2020 to add 'Online Harms Deconstructed'; 28 May 2020 to add 'A Tale of Two Committees'; 22 June 2020 to add 'Online Harms and the Legality Principle'; 11 July 2020 to add CIGR presentation and 'Online Harms Revisited'; 15 December 2020 to add Ofcom submission; 19 December 2020 to add 'The Online Harms edifice takes shape'; 17 May 2021 to add 'Harm Version 3.0: the draft Online Safety Bill'; 11 July 2021 to add 'Carved out or carved up? The draft Online Safety Bill and the press' and 'On the trail of the Person of Ordinary Sensibilities'; 4 February 2022 to add four further posts; 19 February 2022 to add 'Harm Version 4.0'; 30 July 2022 to add 'Mapping the Online Safety Bill' and two evidence submissions to the Public Bill Committee; 31 July 2022 to add 'Platforms adjudging illegality – the Online Safety Bill’s inference engine'; 2 November 2022 to add 'Reimagining the Online Safety Bill'; 27 November 2022 to add 'How well do you know the Online Safety Bill?'; 6 January 2023 to re-order, add some descriptions, improve formatting, and add '(Some of) what is legal offline is illegal online' and 'Twenty questions about the Online Safety Bill'; 8 April 2023 to add 'Positive light or fog in the Channel?' and 'Five lessons from the Loi Avia'; 15 July 2023 to add 'The Pocket Online Safety Bill', 'Knowing the unknowable: musings of an AI content moderator' and 'Shifting paradigms in platform regulation'.]


