Wednesday, 12 April 2023

The Pocket Online Safety Bill

Assailed from all quarters for not being tough enough, for being too tough, for being fundamentally misconceived, for threatening freedom of expression, for technological illiteracy, for threatening privacy, for excessive Ministerial powers, or occasionally for the sin of not being some other Bill entirely – and yet enjoying almost universal cross-party Parliamentary support – the UK’s Online Safety Bill is now limping its way through the House of Lords. It starts its Committee stage on 19 April 2023.

This monster Bill runs to almost 250 pages. It is beyond reasonable hope that anyone coming to it fresh can readily assimilate all its ins and outs. Some features are explicable only with an understanding of its tortuous history, stretching back to the Internet Safety Strategy Green Paper in 2017 via the Online Harms White Paper of April 2019, the draft Bill of May 2021 and the changes following the Conservative leadership election last summer. The Bill has evolved significantly, shedding and adding features as it has been buffeted by gusting political winds, all the while (I would say) teetering on defectively designed foundations.

The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or backbench amendments may pass.

In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure.

This exercise has the merit, at the price of some repetition, of highlighting long-standing issues with the Bill. I have omitted topics that made a brief walk-on appearance only to retreat into the wings (my personal favourite is the Person of Ordinary Sensibilities). Don’t expect to find every aspect of the Bill covered: you won’t find much on age-gating, despite (or perhaps because of) the dominant narrative that the Bill is about protecting children. My interest has been more in illuminating significant issues that have tended to be submerged beneath the slow-motion stampede to do something about the internet.

In April 2019, after reading the White Paper, I said: “If the road to hell is paved with good intentions, this is a motorway.” That opinion has not changed.

Nor has this assessment, three years later in August 2022: “The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.”

If you reckon to know the Bill, try my November 2022 quiz or take a crack at answering the twenty questions that I posed to the Secretary of State’s New Year Q&A (of which one has been answered, by publication of a revised ECHR Memorandum). Otherwise, read on.

The Bill visualised
These six flowcharts illustrate the Bill’s core safety duties and powers as they stand now:

[Flowchart: U2U Illegality Duties]

[Flowchart: Search Illegality Duties]

[Flowchart: U2U Children’s Duties]

[Flowchart: Search Children’s Duties]

[Flowchart: Proactive detection duties and powers (U2U and search)]

[Flowchart: News publishers, journalism and content of democratic importance]

In a more opinionated vein, take a tour of OnlineSafetyVille:

[Illustration]

And finally, the contrast between individual speech governed by general law and the Bill’s scheme of discretionary regulation:

[Diagram]

Big tech and the evil algorithm

State of Play: A continuing theme of the online harms debate has been the predominance of narratives, epitomised by the focus on Big Tech and the Evil Algorithm, which has tended to obscure the broad scope of the legislation. On the figures estimated by the government’s Impact Assessment, 80% of UK service providers in scope will be microbusinesses, employing between 1 and 9 people. A backbench amendment tabled in the Lords proposes to exempt SMEs from the Bill’s duties.

October 2018: “When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.” A Lord Chamberlain for the internet? Thanks, but no thanks

April 2019: "Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. ‘Ofweb’ would regulate social media and internet users at one remove." Users Behaving Badly – the Online Harms White Paper

June 2021: “it is easy to slip into using ‘platforms’ to describe those organisations in scope. We immediately think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is not only about them: the government estimates that 24,000 companies and organisations will be in scope. That is everyone from the largest players to an MP’s discussion app, via Mumsnet and the local sports club discussion forum.” Carved out or carved up? The draft Online Safety Bill and the press

February 2022: “It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.

Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.” Harm Version 4.0 - The Online Harms Bill in metamorphosis

March 2022: “The U2U illegality safety duty is imposed on all in-scope user to user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium businesses and 700 large businesses. Those also include 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low tech discussion forums and many others.” Mapping the Online Safety Bill

November 2022: “‘The Bill is all about Big Tech and large social media companies.’ No. Whilst the biggest ‘Category 1’ services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.” How well do you know the Online Safety Bill?

Duties of care

State of Play: The idea that platforms should be subject to a duty of care analogous to the safety duties owed by occupiers of physical spaces took hold at an early stage of the debate, fuelling a long-running ‘duty of care’ campaign by The Daily Telegraph. Unfortunately, the analogy was always a deeply flawed foundation on which to legislate for speech – something that has become more and more apparent as the government has grappled with the challenges of applying it to the online space. Perhaps recognising these difficulties, the government backed away from imposing a single overarching duty of care in favour of a series of more specific (but still highly abstract) duties. A recent backbench Lords amendment would restrict the Bill’s general definition of ‘harm’ to physical harm, omitting psychological harm.

October 2018"There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other." Take care with that social media duty of care

October 2018: “The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.” Take care with that social media duty of care

October 2018: “[O]ffensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.” Take care with that social media duty of care

October 2018: “[Rhodes v OPO] aptly illustrates the caution that has to be exercised in applying physical world concepts of harm, injury and safety to communication and speech, even before considering the further step of imposing a duty of care on a platform to take steps to reduce the risk of their occurrence as between third parties, or the yet further step of appointing a regulator to superintend the platform’s systems for doing so.” Take care with that social media duty of care

June 2019"[L]imits on duties of care exist for policy reasons that have been explored, debated and developed over many years. Those reasons have not evaporated in a puff of ones and zeros simply because we are discussing the internet and social media." Speech is not a tripping hazard

June 2019: “A tweet is not a projecting nail to be hammered back into place, to the benefit of all who may be at risk of tripping over it. Removing a perceived speech risk for some people also removes benefits to others. Treating lawful speech as if it were a tripping hazard is wrong in principle and highly problematic in practice. It verges on equating speech with violence.” Speech is not a tripping hazard

June 2019: “The notion of a duty of care is as common in everyday parlance as it is misunderstood. In order to illustrate the extent to which the White Paper abandons the principles underpinning existing duties of care, and the serious problems to which that would inevitably give rise, this submission begins with a summary of the role and ambit of safety-related duties of care as they currently exist in law. …

The purely preventive, omission-based kind of duty of care in respect of third party conduct contemplated by the White Paper is exactly that which generally does not exist offline, even for physical injury. The ordinary duty is to avoid inflicting injury, not to prevent someone else from inflicting it.” Speech is not a tripping hazard

June 2020: "It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech - to which it is entirely unfitted." Online Harms Revisited

August 2022: “The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.” Reimagining the Online Safety Bill

Systems and processes or individual items of content?

State of Play: An often-repeated theme is that the Bill is (or should be) about the design of systems and processes, not about content moderation. This is not easy to pin down in concrete terms. If the idea is that some features of services are intrinsically risky, regardless of the content involved, does that mean that Ofcom should be able to recommend banning functionality such as, say, quote posting? Would a systems and processes approach suggest that nothing in the Bill should require a platform to make a judgement about the harmfulness or illegality of individual items of user content?

On a different tack, the government argues that the Bill is indeed focused on systems and processes, and that service providers would not be sanctioned for individual content decisions. Meanwhile, the government’s Impact Assessment estimates that the increased content moderation required by the Bill would cost around £1.9 billion over 10 years. Whatever the pros and cons of a systems and processes approach, the Bill is largely about content moderation.

September 2020"The question for an intermediary subject to a legal duty of care will be: “are we obliged to consider taking steps (and if so what steps) in respect of these words, or this image, in this context?” If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be." Submission to Ofcom Call for Evidence

November 2021: "Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree.

To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy.” The draft Online Safety Bill: systemic or content-focused?

November 2021: “The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that.

For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with this item of content in this context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than with punishing service providers for individual content decision failures.” The draft Online Safety Bill: systemic or content-focused?

November 2021: “It is not immediately obvious why the government has set so much store by the claimed systemic nature of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from individual content decisions it can avoid accusations of state censorship. If so, that ignores the fact that service providers, via their safety duties, are proxies for the regulator. The effect of the legislation on individual items of user content is no less concrete because service providers are required to make decisions under the supervision of Ofcom, rather than if Ofcom were wielding the blue pencil, the muffler or the content warning generator itself.” The draft Online Safety Bill: systemic or content-focused?

November 2021: "Notwithstanding its abstract framing, the impact of the draft Bill ... would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in? We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result." The draft Online Safety Bill concretised

November 2022: “From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.” How well do you know the Online Safety Bill?

Platforms adjudging illegality

State of Play: The Bill’s illegality duties are mapped out in the U2U and search engine diagrams in the opening section. The Bill imposes both reactive and proactive duties on providers. The proactive duties require platforms to take measures to prevent users from encountering illegal content, encompassing the use of automated detection and removal systems. If a platform becomes aware of illegal content, it must swiftly remove it.

In the present iteration of the Bill, the platform (or its automated systems) must treat content as illegal if it has reasonable grounds to infer, on the basis of all information reasonably available to it, that the content is illegal. That is stipulated in Clause 170, which was introduced in July 2022 as New Clause 14. A backbench Lords amendment would raise the threshold to manifest illegality.
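By way of illustration, the shape of that decision can be sketched in code. This is a loose paraphrase of the Clause 170 test, not the statutory wording; the names and the reduction to booleans are mine:

```python
from dataclasses import dataclass

@dataclass
class AvailableInformation:
    """What the provider can infer from 'all information reasonably
    available to it' about one item of content (simplified to booleans)."""
    grounds_to_infer_conduct_element: bool   # the act the offence requires
    grounds_to_infer_mental_element: bool    # intent, knowledge, etc.
    grounds_to_infer_defence: bool           # anything suggesting a defence

def treat_as_illegal(info: AvailableInformation) -> bool:
    # Content is treated as illegal if there are reasonable grounds to
    # infer the elements of the offence, and no reasonable grounds to
    # infer a defence. Note the asymmetry: a possible defence counts
    # only if the available information positively suggests one.
    return (info.grounds_to_infer_conduct_element
            and info.grounds_to_infer_mental_element
            and not info.grounds_to_infer_defence)
```

Everything then turns on what information is ‘reasonably available’, which for real-time automated filtering may be little more than the content itself.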

June 2019: “In some kinds of case … illegality will be manifest. For most categories it will not be, for any number of reasons. The alleged illegality may be debatable as a matter of law. It may depend on context, including factual matters outside the knowledge of the intermediary. The relevant facts may be disputed. There may be available defences, including perhaps public interest. Illegality may depend on the intention or knowledge of one of the parties. And so it goes on. …

If there were to be any kind of positive duty to remove illegal material of which an intermediary becomes aware, it is unclear why that should go beyond material which is manifestly illegal on the face of it. If a duty were to go beyond that, consideration should be given to restricting it to specific offences that either impinge on personal safety (properly so called) or, for sound reasons, are regarded as sufficiently serious to warrant a separate positive duty which has the potential to contravene the presumption against prior restraint.” Speech is not a tripping hazard

February 2020"legality is rarely a question of inspecting an item of content alone without an understanding of the factual context. A court assesses evidence according to a standard of proof: balance of probabilities for civil liability, beyond reasonable doubt for criminal. Would the same process apply to the duty of care? Or would the mere potential for illegality trigger the ‘unlawfulness’ duty of care, with its accompanying obligation to remove user content? Over two years after the Internet Safety Green Paper, and the best part of a year after the White Paper, the consultation response contains no indication that the government recognises the existence of this issue, let alone has started to grapple with it." Online Harms Deconstructed - the Initial Consultation Response

February 2022: “It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression.

… The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.

The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.” Harm Version 4.0 - The Online Harms Bill in metamorphosis

March 2022: “The problem with the ‘reasonable grounds to believe’ or similar threshold is that it expressly bakes in over-removal of lawful content. …

This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time.” Mapping the Online Safety Bill

March 2022: “This analysis may suggest that for a proactive monitoring duty founded on illegality to be capable of compliance with the [ECHR] ‘prescribed by law’ requirement, it should be limited to offences the commission of which can be adjudged on the face of the user content without recourse to further information.

Further, proportionality considerations may lead to the perhaps stricter conclusion that the illegality must be manifest on the face of the content without requiring the platform to make any independent assessment of the content in order to find it unlawful. …

The [government’s ECHR] Memorandum does not address the arbitrariness identified above in relation to proactive illegality duties, stemming from an obligation to adjudge illegality in the legislated or inevitable practical absence of material facts. Such a vacuum cannot be filled by delegated powers, by an Ofcom code of practice, or by stipulating that the platform’s systems and processes must be proportionate.” Mapping the Online Safety Bill

May 2022: “For priority illegal content the Bill contemplates proactive monitoring, detection and removal technology operating in real time or near-real time. There is no obvious possibility for such technology to inform itself of extrinsic information about a post, such as might give rise to a defence of reasonable excuse, or which might shed light on the intention of the poster, or provide relevant external context.” Written evidence to Public Bill Committee

July 2022"... especially for real-time proactive filtering providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making. Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff." Platforms adjudging illegality – the Online Safety Bill’s inference engine

July 2022: “In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned. …

It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.” Platforms adjudging illegality – the Online Safety Bill’s inference engine

November 2022: “The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.

Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms’) mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon.

The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality.” How well do you know the Online Safety Bill?

January 2023"If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications." Positive light or fog in the Channel?

January 2023: “False positives are inevitable with any moderation system – all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives.” Positive light or fog in the Channel?

January 2023: “These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences… . They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression.

It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.” Positive light or fog in the Channel?
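The threshold dilemma that runs through these extracts can be made concrete with a toy simulation. All of the numbers below are invented and the ‘classifier’ is a caricature; the point is only the statistical tradeoff: with any imperfect detector, a low removal threshold takes down legal content at scale, while a high one lets illegal content through.

```python
import random

random.seed(0)

# Toy corpus: 95% of items legal, 5% illegal (invented proportions).
labels = ["illegal" if random.random() < 0.05 else "legal" for _ in range(100_000)]
total_illegal = labels.count("illegal")

def score(label: str) -> float:
    """An imperfect scorer: illegal items tend to score higher, but the two
    distributions overlap, as they would for any real moderation model."""
    mean = 0.7 if label == "illegal" else 0.3
    return min(1.0, max(0.0, random.gauss(mean, 0.15)))

scored = [(score(label), label) for label in labels]

for threshold in (0.4, 0.5, 0.6, 0.7):
    removed = [label for s, label in scored if s >= threshold]
    wrongly_removed = removed.count("legal")            # legal content taken down
    missed = total_illegal - removed.count("illegal")   # illegal content left up
    print(f"threshold {threshold}: removed {len(removed):>6}, "
          f"legal wrongly removed {wrongly_removed:>6}, illegal missed {missed:>4}")
```

Whichever threshold is picked, one of the two error counts is substantial. A ‘reasonable grounds to infer’ standard does not escape the dilemma; it simply fixes where on that curve platforms are required to sit.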

Freedom of expression and prior restraint

State of Play: The debate on the effect of the Bill on freedom of expression is perhaps the most polarised of all: the government contending that the Bill sets out to secure freedom of expression in various ways, its critics maintaining that the Bill’s duties on service providers will inevitably damage freedom of expression through suppression of legitimate user content. Placing stronger freedom of expression duties on platforms when carrying out their safety duties may be thought to highlight the Bill’s deep internal contradictions.

October 2018: “We derive from the right of freedom of speech a set of principles that collide with the kind of actions that duties of care might require, such as monitoring and pre-emptive removal of content. The precautionary principle may have a place in preventing harm such as pollution, but when applied to speech it translates directly into prior restraint. The presumption against prior restraint refers not just to pre-publication censorship, but the principle that speech should stay available to the public until the merits of a complaint have been adjudicated by a legally competent independent tribunal. The fact that we are dealing with the internet does not negate the value of procedural protections for speech.” A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018"US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and by hook or by crook the internet genie must be stuffed back in its bottle." A Lord Chamberlain for the internet? Thanks, but no thanks

June 2019: “If it be said that mere facilitation of users’ individual public speech is sufficient to justify control via a preventive duty of care placed on intermediaries, that proposition should be squarely confronted. It would be tantamount to asserting that individual speech is to be regarded by default as a harm to be mitigated, rather than as the fundamental right of human beings in a free society. As such the proposition would represent an existential challenge to the right of individual freedom of speech.” Speech is not a tripping hazard

June 2019: “The duty of care would…, since the emphasis is on prevention rather than action after the event, create an inherent conflict with the presumption against prior restraint, a long standing principle designed to provide procedural protection for freedom of expression.” Speech is not a tripping hazard

Feb 2020"People like to say that freedom of speech is not freedom of reach, but that is just a slogan. If the state interferes with the means by which speech is disseminated or amplified, it engages the right of freedom of expression. Confiscating a speaker’s megaphone at a political rally is an obvious example. ... Seizing a printing press is not exempted from interference because the publisher has the alternative of handwriting. Freedom of speech is not just freedom to whisper." Online Harms IFAQ

February 2020: “… increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. So Carnegie questions whether removing a retweet facility is really a violation of users’ rights to formulate their own opinion and express their views, or rather – to the contrary – a mechanism to support those rights by slowing them down so that they can better appreciate content, especially as regards onward sharing.

The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.” Online Harms IFAQ

June 2020"The French Constitutional Council decision is a salutary reminder that fundamental rights issues are not the sole preserve of free speech purists, nor mere legal pedantry to be brushed aside in the eagerness to do something about the internet and social media." Online Harms and the Legality Principle

June 2020: “10 things that Article 19 of the Universal Declaration of Human Rights doesn’t say” (Twitter thread – now 18 things.) Sample:

“6. Everyone has the right to seek, receive and impart information and ideas through any media, always excepting the internet and social media.”

May 2021: "… the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal. That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.

No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. That smacks of faith in the existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it.

Internal conflicts between duties... sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as ... a censor’s charter.” Harm Version 3.0: the draft Online Safety Bill

June 2021"Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those that facilitate it must take precautionary steps to mitigate the danger. That is the antithesis of the traditional principle that, within boundaries set by clear and precise laws, we are free to speak as we wish. The mainstream press may comfort themselves that this novel approach to speech is (for the moment) being applied only to the evil internet and to the unedited individual speech of social media users; but it is an unwelcome concept to see take root if you have spent centuries arguing that freedom of expression is not a fundamental risk, but a fundamental right." Carved out or carved up? The draft Online Safety Bill and the press

June 2021: “[D]iscussions of freedom of expression tend to resemble convoys of ships passing in the night. If, by the right of freedom of expression, Alice means that she should be able to speak without fear of being visited with state coercion; Bob means a space in which the state guarantees, by threat of coercion to the owner of the space, that he can speak; Carol contends that in such a space she cannot enjoy a fully realised right of freedom of expression unless the state forcibly excludes Dan’s repugnant views; and Ted says that irrespective of the state, Alice and Bob and Carol and Dan all directly engage each other’s fundamental right of freedom of expression when they speak to each other; then not only will there be little commonality of approach amongst the four, but the fact that they are talking about fundamentally different kinds of rights is liable to be buried beneath the single term, freedom of expression.

If Grace adds that since we should not tolerate those who are intolerant of others’ views the state should – under the banner of upholding freedom of expression – act against intolerant speech, the circle of confusion is complete.” Speech vs. Speech

November 2021: “A systemic [safety] duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky.

The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.” The draft Online Safety Bill: systemic or content-focused?

March 2022"It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality. It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content." Mapping the Online Safety Bill

March 2023"In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia ... the decision makes uncomfortable reading for some core aspects of the Online Safety Bill." Five lessons from the Loi Avia

Rule of law

State of Play: Once the decision was made to enact a framework designed to give flexibility to a regulator (Ofcom), rule of law concerns around certainty and foreseeability of content rules and decisions were bound to come to the fore. These issues are part and parcel of the government’s settled policy approach.

March 2019: “Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.” A Ten Point Rule of Law Test for a Social Media Duty of Care

June 2019: “The White Paper, although framed as regulation of platforms, concerns individual speech. The platforms would act as the co-opted proxies of the state in regulating the speech of users. Certainty is a particular concern with a law that has consequences for individuals’ speech. In the context of an online duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what speech is liable to be the subject of preventive or mitigating action by a platform operator operating under the duty of care.” Speech is not a tripping hazard

May 2020"If you can't articulate a clear and certain rule about speech, you don't get to make a rule at all." Disinformation and Online Harms

June 2020: “The proposed Online Harms legislation falls squarely within [the legality] principle, since internet users are liable to have their posts, tweets, online reviews and every other kind of public or semi-public communication interfered with by the platform to which they are posting, as a result of the duty of care to which the platform would be subject. Users, under the principle of legality, must be able to foresee, with reasonable certainty, whether the intermediary would be legally obliged to interfere with what they are about to say online.” Online Harms and the Legality Principle

September 2020: “If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be. That is important from the perspective not only of the intermediary, but of the user. From a rule of law standpoint, it is imperative that the user should be able to predict, in advance, with reasonable certainty, whether what they wish to say is likely to be affected by the actions of an intermediary seeking to discharge its duty of care.” Submission to Ofcom Call for Evidence

September 2020: “…the purpose of these examples is less about what the answer is in any given case (although that is of course important in terms of whether the line is being drawn in the right place), but more about whether we are able to predict the answer in advance. If a legal framework does not enable us to predict clearly, in advance, what the answer is in each case, then there is no line and the framework falls at the first rule of law hurdle of ‘prescribed by law’. It is not sufficient to make ad hoc pronouncements about what the answer is in each case, or to invoke high level principles. We have to know why the answer is what it is, expressed in terms that enable us to predict with confidence the answer in other concrete cases.” Submission to Ofcom Call for Evidence

August 2022: “The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action.” Reimagining the Online Safety Bill

Regulation by regulator

State of Play: A regulatory model akin to broadcast-style regulation by regulator has been part of the government’s settled approach from the start. Changing that would require a rethink of the Bill.

June 2018: “The choice is not between regulating or not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.” Regulating the internet – intermediaries to perpetrators

October 2018"[W]hen regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve. A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018: "It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet." A Lord Chamberlain for the internet? Thanks, but no thanks

March 2019"...the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'etre is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.” A Ten Point Rule of Law Test for a Social Media Duty of Care

May 2019: “Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in the mid-1990s. … In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.” The Rule of Law and the Online Harms White Paper

June 2019: “A Facebook, Twitter or Mumsnet user is not an invited audience member on a daytime TV show, but someone exercising their freedom to speak to the world within clearly defined boundaries set by the law. A policy initiative to address behaviour online should take that principle as its starting point and respect and work within it. The White Paper does not do so. It cannot be assumed that an acceptable mode of regulation for broadcast is appropriate for individual speech. The norm in the offline world is that individual speech should be governed by general laws, not by a discretionary regulator.” Speech is not a tripping hazard

February 2020: “Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace.

The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same. And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.” Online Harms IFAQ

June 2020: “Broadcast-style regulation is the exception, not the norm. In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is as true for the internet as for any other medium.” Online Harms Revisited

Analogy wars

October 2018"Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’etre. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech." A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018: “[N]o analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the duty of care would consist in preventing behaviour of one site visitor to another.

The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.” Take care with that social media duty of care

May 2021: “Welcome to the Online Regulation Analogy Collection: speech as everything that it isn’t (and certainly not as the freedom that underpins all other freedoms)” (Twitter thread)

What’s illegal offline is illegal online

State of Play: Amongst all the narratives that have infused the Online Harms debate, the mantra of online-offline equivalence has been one of the longest-running.

February 2022: "Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence.” Harm Version 4.0 - The Online Harms Bill in metamorphosis

December 2022: “DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online? The newest Culture Secretary, Michelle Donelan, appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: ‘However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person.’

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that.” (Some of) what is legal offline is illegal online

End-to-end encryption

State of Play: The issue of end-to-end encryption, and the allied Ofcom power to require messaging platforms to deploy CSEA scanning technology, has been a slow burner. It will feature in Lords amendments.

June 2019: “What would prevent the regulator from requiring an in-scope private messaging service to remove end-to-end encryption? This is a highly sensitive topic which was the subject of considerable Parliamentary debate during the passage of the Investigatory Powers Bill. It is unsuited to be delegated to the discretion of a regulator.” Speech is not a tripping hazard

May 2020"This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end to end encryption." A Tale of Two Committees

November 2022: “Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology – which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.” How well do you know the Online Safety Bill?

New offences

State of Play: The Bill introduces several new offences that could be committed by users. The proposal to enact a new harmful communications offence was dropped after well-founded criticism, leaving the notorious S.127(1) of the Communications Act 2003 in place. The government is expected to introduce more offences.

A backbench Lords amendment seeks to add the new false and threatening communications offences to the list of priority illegal content that platforms would have to proactively seek out and remove.

March 2022: “The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them).

It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.” Mapping the Online Safety Bill

February 2023: “Why has the government used different language from the Law Commission's recommendation for the threatening communications offence? The concern is that the government’s rewording broadens the offence beyond the genuine threats that the Law Commission intended should be captured. The spectre of the Twitter Joke Trial hovers in the wings.” (Twitter thread)

Extraterritoriality

State of Play: The territorial reach of the Bill has attracted relatively little attention. As a matter of principle, territorial overreach is to be deprecated, not least because it encourages a similar lack of jurisdictional self-restraint on the part of other countries.

December 2020: “For the first time, the Final Response has set out the proposed territorial reach of the proposed legislation. Somewhat surprisingly, it appears to propose that services should be subject to UK law on a ‘mere availability of content’ basis. Given the default cross-border nature of the internet, this is tantamount to legislating extraterritorially for the whole world. It would follow that any provider anywhere in the rest of the world would have to geo-fence its service to exclude the UK in order to avoid engaging UK law. Legislating on a mere availability basis has been the subject of criticism over many years since the advent of the internet.” The Online Harms edifice takes shape

March 2022: “The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world.

The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service. Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated.” Mapping the Online Safety Bill
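Reduced to its logical skeleton (an illustrative paraphrase only; the parameter names are mine, not the Bill’s), the ‘UK-linked’ definition is a three-limb disjunction, which is why the breadth of the third limb matters: it can bring a service into scope on its own.

```python
def is_uk_linked(significant_number_of_uk_users: bool,
                 uk_is_a_target_market: bool,
                 material_risk_of_significant_harm_to_uk_users: bool) -> bool:
    """Sketch of the three-limb 'UK-linked' test. The limbs are
    disjunctive: any one suffices. The third limb is the one that
    verges on 'mere accessibility', since a material risk of harm can
    arise simply because UK users can reach the content."""
    return (significant_number_of_uk_users
            or uk_is_a_target_market
            or material_risk_of_significant_harm_to_uk_users)
```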