Assailed from all quarters for being not tough enough, for being too tough, for being fundamentally misconceived, for threatening freedom of expression, for technological illiteracy, for threatening privacy, for excessive Ministerial powers, or occasionally for the sin of not being some other Bill entirely – and yet enjoying almost universal cross-party Parliamentary support – the UK’s Online Safety Bill is now limping its way through the House of Lords. It starts its Committee stage on 19 April 2023.
This monster Bill runs to almost 250 pages. It is beyond reasonable hope that anyone coming to it fresh can readily assimilate all its ins and outs. Some features are explicable only with an understanding of its tortuous history, stretching back to the Internet Safety Strategy Green Paper in 2017 via the Online Harms White Paper of April 2019, the draft Bill of May 2021 and the changes following the Conservative leadership election last summer. The Bill has evolved significantly, shedding and adding features as it has been buffeted by gusting political winds, all the while (I would say) teetering on defectively designed foundations.
The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or backbench amendments may pass.
In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure.
This exercise has the merit, at the price of some repetition, of highlighting long-standing issues with the Bill. I have omitted topics that made a brief walk-on appearance only to retreat into the wings (my personal favourite is the Person of Ordinary Sensibilities). Don’t expect to find every aspect of the Bill covered: you won’t find much on age-gating, despite (or perhaps because of) the dominant narrative that the Bill is about protecting children. My interest has been more in illuminating significant issues that have tended to be submerged beneath the slow-motion stampede to do something about the internet.
In April 2019, after reading the White Paper, I said: “If the road to hell is paved with good intentions, this is a motorway.” That opinion has not changed.
Nor has this assessment, three years later in August 2022: "The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.”
If you reckon to know the Bill, try my November 2022 quiz or take a crack at answering the twenty questions that I posed to the Secretary of State’s New Year Q&A (one of which has since been answered, by publication of a revised ECHR Memorandum). Otherwise, read on.
The Bill visualised
These six flowcharts illustrate the Bill’s core safety duties and powers as they stand now.
U2U Illegality Duties
Search Illegality Duties
Proactive detection duties and powers (U2U and search)
Big tech and the evil algorithm
State of Play: A continuing theme of the online harms debate has been the predominance of narratives, epitomised by the focus on Big Tech and the Evil Algorithm, which has tended to obscure the broad scope of the legislation. On the figures estimated by the government's Impact Assessment, 80% of UK service providers in scope will be microbusinesses, employing between 1 and 9 people. A backbench amendment tabled in the Lords proposes to exempt SMEs from the Bill's duties.
October 2018: “When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.” A Lord Chamberlain for the internet? Thanks, but no thanks
April 2019:
June 2021:
Feb 2022: “It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.
Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.” Harm Version 4.0 - The Online Harms Bill in metamorphosis
March 2022:
Nov 2022:
Duties of care
State of Play: The idea that platforms should be subject to a duty of care analogous to safety duties owed by occupiers of physical spaces took hold at an early stage of the debate, fuelling a long-running eponymous campaign by The Daily Telegraph. Unfortunately, the analogy was always a deeply flawed foundation on which to legislate for speech – something that has become more and more apparent as the government has grappled with the challenges of applying it to the online space. Perhaps recognising these difficulties, the government backed away from imposing a single overarching duty of care in favour of a series of more specific (but still highly abstract) duties. A recent backbench Lords amendment would restrict the Bill's general definition of 'harm' to physical harm, omitting psychological harm.
October 2018:
October 2018:
October 2018:
October 2018:
June 2019:
June 2019:
June 2019: “The notion of a duty of care is as common in everyday parlance as it is misunderstood. In order to illustrate the extent to which the White Paper abandons the principles underpinning existing duties of care, and the serious problems to which that would inevitably give rise, this submission begins with a summary of the role and ambit of safety-related duties of care as they currently exist in law. …
The purely preventive, omission-based kind of duty of care in respect of third party conduct contemplated by the White Paper is exactly that which generally does not exist offline, even for physical injury. The ordinary duty is to avoid inflicting injury, not to prevent someone else from inflicting it.” Speech is not a tripping hazard
June 2020:
August 2022:
Systems and processes or Individual Items of Content?
State of Play: An often-repeated theme is that the Bill is (or should be) about design of systems and processes, not about content moderation. This is not easy to pin down in concrete terms. If the idea is that there are features of services that are intrinsically risky, regardless of the content involved, does that mean that Ofcom should be able to recommend banning functionality such as (say) quote posting? Would a systems and processes approach suggest that nothing in the Bill should require a platform to make a judgement about the harmfulness or illegality of individual items of user content?
On a different tack, the government argues that the Bill is indeed focused on systems and processes, and that service providers would not be sanctioned for individual content decisions. Meanwhile, the government's Impact Assessment estimates that the increased content moderation required by the Bill would cost around £1.9 billion over 10 years. Whatever the pros and cons of a systems and processes approach, the Bill is largely about content moderation.
September 2020:
November 2021: "Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content – to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree.
To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy." The draft Online Safety Bill: systemic or content-focused?
November 2021: “The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that.
For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with this item of content in this context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than with punishing service providers for individual content decision failures.” The draft Online Safety Bill: systemic or content-focused?
November 2021:
November 2021:
November 2022:
Platforms adjudging illegality
State of Play: The Bill’s illegality duties are mapped out in the U2U and search engine diagrams in the opening section. The Bill imposes both reactive and proactive duties on providers. The proactive duties require platforms to take measures to prevent users encountering illegal content, encompassing the use of automated detection and removal systems. If a platform becomes aware of illegal content, it must swiftly remove it.
In the present iteration of the Bill, the platform (or its automated systems) must treat content as illegal if it has reasonable grounds to infer, on the basis of all information reasonably available to it, that the content is illegal. That is stipulated in Clause 170, which was introduced in July 2022 as New Clause 14. A backbench Lords amendment would raise the threshold to manifest illegality.
June 2019: “In some kinds of case … illegality will be manifest. For most categories it will not be, for any number of reasons. The alleged illegality may be debatable as a matter of law. It may depend on context, including factual matters outside the knowledge of the intermediary. The relevant facts may be disputed. There may be available defences, including perhaps public interest. Illegality may depend on the intention or knowledge of one of the parties. And so it goes on. …
If there were to be any kind of positive duty to remove illegal material of which an intermediary becomes aware, it is unclear why that should go beyond material which is manifestly illegal on the face of it. If a duty were to go beyond that, consideration should be given to restricting it to specific offences that either impinge on personal safety (properly so called) or, for sound reasons, are regarded as sufficiently serious to warrant a separate positive duty which has the potential to contravene the presumption against prior restraint.” Speech is not a tripping hazard
February 2020:
February 2022: “It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression.
… The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.
The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.” Harm Version 4.0 - The Online Harms Bill in metamorphosis
March 2022: “The problem with the “reasonable grounds to believe” or similar threshold is that it expressly bakes in over-removal of lawful content. …
This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time.” Mapping the Online Safety Bill
March 2022: “This analysis may suggest that for a proactive monitoring duty founded on illegality to be capable of compliance with the [ECHR] ‘prescribed by law’ requirement, it should be limited to offences the commission of which can be adjudged on the face of the user content without recourse to further information.
Further, proportionality considerations may lead to the perhaps stricter conclusion that the illegality must be manifest on the face of the content without requiring the platform to make any independent assessment of the content in order to find it unlawful. …
The [government’s ECHR] Memorandum does not address the arbitrariness identified above in relation to proactive illegality duties, stemming from an obligation to adjudge illegality in the legislated or inevitable practical absence of material facts. Such a vacuum cannot be filled by delegated powers, by an Ofcom code of practice, or by stipulating that the platform’s systems and processes must be proportionate.” Mapping the Online Safety Bill
May 2022:
July 2022:
July 2022: “In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned. …
It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.” Platforms adjudging illegality – the Online Safety Bill’s inference engine
November 2022: “The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.
Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless they have reasonable grounds to infer that one may be successfully relied upon.
The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality.” How well do you know the Online Safety Bill?
January 2023:
January 2023:
January 2023: “These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences… . They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression.
It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.” Positive light or fog in the Channel?
Freedom of expression and Prior Restraint
State of Play: The debate on the effect of the Bill on freedom of expression is perhaps the most polarised of all: the government contending that the Bill sets out to secure freedom of expression in various ways, and its critics maintaining that the Bill's duties on service providers will inevitably damage freedom of expression through suppression of legitimate user content. Placing stronger freedom of expression duties on platforms when carrying out their safety duties may be thought to highlight the Bill's deep internal contradictions.
October 2018:
October 2018:
June 2019:
June 2019:
Feb 2020:
Feb 2020: “… increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. So Carnegie questions whether removing a retweet facility is really a violation of users' rights to formulate their own opinion and express their views, or rather - to the contrary - a mechanism to support those rights by slowing them down so that they can better appreciate content, especially as regards onward sharing.
The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.” Online Harms IFAQ
June 2020:
June 2020: “10 things that Article 19 of the Universal Declaration of Human Rights doesn’t say” (Twitter thread – now 18 things.) Sample:
“6. Everyone has the right to seek, receive and impart information and ideas through any media, always excepting the internet and social media.”
May 2021: "… the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal. That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.
No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. That smacks of faith in the existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it.
Internal conflicts between duties... sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as ... a censor’s charter." Harm Version 3.0: the draft Online Safety Bill
June 2021:
June 2021: “[D]iscussions of freedom of expression tend to resemble convoys of ships passing in the night. If, by the right of freedom of expression, Alice means that she should be able to speak without fear of being visited with state coercion; Bob means a space in which the state guarantees, by threat of coercion to the owner of the space, that he can speak; Carol contends that in such a space she cannot enjoy a fully realised right of freedom of expression unless the state forcibly excludes Dan’s repugnant views; and Ted says that irrespective of the state, Alice and Bob and Carol and Dan all directly engage each other’s fundamental right of freedom of expression when they speak to each other; then not only will there be little commonality of approach amongst the four, but the fact that they are talking about fundamentally different kinds of rights is liable to be buried beneath the single term, freedom of expression.
If Grace adds that since we should not tolerate those who are intolerant of others’ views the state should – under the banner of upholding freedom of expression – act against intolerant speech, the circle of confusion is complete.” Speech vs. Speech
November 2021: “A systemic [safety] duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky.
The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.” The draft Online Safety Bill: systemic or content-focused?
March 2022:
March 2023:
Rule of law
State of Play: Once the decision was made to enact a framework designed to give flexibility to a regulator (Ofcom), rule of law concerns around certainty and foreseeability of content rules and decisions were bound to come to the fore. These issues are part and parcel of the government's chosen policy approach.
March 2019:
June 2019:
May 2020:
June 2020:
September 2020:
September 2020:
August 2022:
Regulation by regulator
State of Play: A regulatory model akin to broadcast-style regulation by regulator has been part of the government's settled approach from the start. Changing that would require a rethink of the Bill.
June 2018: “The choice is not between regulating or not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries.
Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.” Regulating the internet – intermediaries to perpetrators
October 2018:
October 2018: "It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.
If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet." A Lord Chamberlain for the internet? Thanks, but no thanks
March 2019:
May 2019:
June 2019:
February 2020: “Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace.
The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same. And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.” Online Harms IFAQ
June 2020:
Analogy wars
October 2018:
October 2018: “[N]o analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the duty of care would consist in preventing behaviour of one site visitor to another.
The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.” Take care with that social media duty of care
May 2021:
What’s illegal offline is illegal online
State of Play: Amongst all the narratives that have infused the Online Harms debate, the mantra of online-offline equivalence has been one of the longest-running.
February 2022: "Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.
That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).
On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence. " Harm Version 4.0 - The Online Harms Bill in metamorphosis
December 2022: “DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.
The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online? The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person."
Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that." (Some of) what is legal offline is illegal online
End-to-End Encryption
State of Play: The issue of end-to-end encryption, and the allied Ofcom power to require messaging platforms to deploy CSEA scantech, has been a slow burner. It will feature in Lords amendments.
June 2019:
May 2020:
November 2022:
New offences
State of Play: The Bill introduces several new offences that could be committed by users. The proposal to enact a new harmful communications offence was dropped after well-founded criticism, but that leaves the notorious S.127(1) Communications Act offence in place. The government is expected to introduce more offences.
A backbench Lords amendment seeks to add the new false and threatening communications offences to the list of priority illegal content that platforms would have to proactively seek out and remove.
March 2022: “The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them).
It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.” Mapping the Online Safety Bill
February 2023: “Why has the government used different language from the Law Commission's recommendation for the threatening communications offence? The concern is that the government’s rewording broadens the offence beyond the genuine threats that the Law Commission intended should be captured. The spectre of the Twitter Joke Trial hovers in the wings.” (Twitter thread)
Extraterritoriality
State of Play: The territorial reach of the Bill has attracted relatively little attention. As a matter of principle, territorial overreach is to be deprecated, not least because it encourages similar lack of jurisdictional self-restraint on the part of other countries.
December 2020:
March 2022: “The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world.
The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service. Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated.” Mapping the Online Safety Bill