Cyberleagle: Graham Smith's blog on law, IT, the internet and online media<br /><br /><b>Shifting paradigms in platform regulation</b> (17 June 2023)<span style="font-family: georgia;">[Based on a keynote address to the conference on <a href="https://www.eventbrite.co.uk/e/contemporary-social-and-legal-issues-in-a-social-media-age-tickets-580483189847" target="_blank"><i>Contemporary Social and Legal Issues in a Social Media Age</i></a> held at Keele University on 14 June 2023.] <br /><br />First, an apology for the title. Not for the rather sententious ‘shifting paradigms’ – this is, after all, an academic conference – but for ‘platform regulation’. If ever there was a cliché that cloaks assumptions and fosters ambiguity, ‘platform regulation’ is it.<br /><br />Why is that? For three reasons. <br /><br /><b>First</b>, it conceals the target of regulation. In the context with which we are concerned, users – not platforms – are the primary target. In the Online Safety Bill model, platforms are not the end. They are merely the means by which the state seeks to control – regulate, if you like – the speech of end-users. <br /><br /><b>Second</b>, because of the ambiguity inherent in the word regulation. In its broad sense it embraces everything from the general law of the land that governs – regulates, if you like – our speech to discretionary, broadcast-style, regulation by regulator: the Ofcom model. If we think – and I suspect many don’t – that the difference matters, then to have them all swept up together under the banner of regulation is unhelpful. 
<br /><br /><b>Third</b>, because it opens the door to the kind of sloganising with which we have become all too familiar over the course of the Online Harms debate: the unregulated Internet; the Wild West Web; ungoverned online spaces. <br /><br />What do they mean by this?<br /></span><div><ul style="text-align: left;"><li><span style="font-family: georgia;">Do they mean that there is no law online? <i><a href="https://twitter.com/cyberleagle/status/1209466608126746624" target="_blank">Internet Law and Regulation</a></i> has 750,000 words that suggest otherwise. </span></li><li><span style="font-family: georgia;">Do they mean that there is law but it is not enforced? Perhaps they should talk to the police, or look at new ways of providing access to justice. </span></li><li><span style="font-family: georgia;">Do they mean that there is no Ofcom online? That is true – for the moment - but the idea that individual speech should be subject to broadcast-style regulation rather than the general law is hardly a given. Broadcast regulation of speech is the exception, not the norm.</span></li><li><span style="font-family: georgia;">Do they mean that speech laws should be stricter online than offline? That is a proposition to which no doubt some will subscribe, but how does that square with the notion of equivalence implicit in the other studiously repeated mantra: that what is illegal offline should be illegal online? 
</span></li></ul><span style="font-family: georgia;">The sloganising perhaps reached its nadir when the Joint Parliamentary Committee scrutinising the draft Online Safety Bill decided to publish its Report under the strapline: ‘<a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/159784/no-longer-the-land-of-the-lawless-joint-committee-reports/" target="_blank">No Longer the Land of the Lawless</a>’ - 100% headline-grabbing clickbait – adding, for good measure: “A landmark report which will make the tech giants abide by UK law”. <br /><br />Even if the Bill were about tech giants and their algorithms – and according to the government’s <a href="https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/overview-of-expected-impact-of-changes-to-the-online-safety-bill" target="_blank">Impact Assessment</a> 80% of in-scope UK service providers will be micro-businesses – at its core the Bill seeks not to make tech giants abide by UK law, but to press platforms into the role of detective, judge and bailiff: to require them to pass judgment on whether we – the users - are abiding by UK law. That is quite different. <br /><br />What are the shifting paradigms to which I have alluded? <br /><br />First the shift <b>from Liability to Responsibility </b><br /><br />Go back twenty-five years and the debate was all about liability of online intermediaries for the unlawful acts of their users. If a user’s post broke the law, should the intermediary also be liable and if so in what circumstances? The analogies were with phone companies and bookshops or magazine distributors, with primary and secondary publishers in defamation, with primary and secondary infringement in copyright, and similar distinctions drawn in other areas of the law. <br /><br />In Europe the main outcome of this debate was the E-Commerce Directive, passed at the turn of the century and implemented in the UK in 2002. 
It laid down the well-known categories of conduit, caching and hosting. Most relevantly to platforms, for hosting it provided a <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">liability shield based on lack of knowledge of illegality</a>. Only if you gained knowledge that an item of content was unlawful, and then failed to remove that content expeditiously, could you be exposed to liability for it. This was closely based on the bookshop and distributor model. <br /><br />The hosting liability regime was – and is – similar to the notice and takedown model of the US Digital Millennium Copyright Act – and significantly different from S.230 of the US Communications Decency Act 1996, which was more closely akin to full conduit immunity. <br /><br />The E-Commerce Directive’s knowledge-based hosting shield incentivises – but does not require – a platform to remove user content on gaining knowledge of illegality. It exposes the platform to risk of liability under the relevant underlying law. That is all it does. Liability does not automatically follow. <br /><br />Of course the premise underlying all of these regimes is that the user has broken some underlying substantive law. If the user hasn’t broken the law, there is nothing that the platform could be liable for. <br /><br />It is pertinent to ask – for whose benefit were these liability shields put in place? There is a tendency to frame them as a temporary inducement to grow the then nascent internet industry. Even if there was an element of that, the deeper reason was to protect the legitimate speech of users. The greater the liability burden on platforms, the greater their incentive to err on the side of removing content, the greater the risk to legitimate speech and the greater the intrusion on the fundamental speech rights of users. 
The distributor liability model adopted in Europe, and the S.230 conduit model in the USA, were for the protection of users as much as, if not more than, for the benefit of platforms. <br /><br />The Shift to Responsibility has taken two forms. <br /><br />First, the increasing volume of the ‘<b>publishers not platforms</b>’ narrative. The view is that platforms are curating and recommending user content and so should not have the benefit of the liability shields. As often and as loudly as this is repeated, it has gained little legislative traction. Under the Online Safety Bill the liability shields remain untouched. In the EU Digital Services Act the shields are refined and tweaked, but the fundamentals remain the same. If, incidentally, we think back to the bookshop analogy, it was never the case that a bookshop would lose its liability shield if it promoted selected books in its window, or decided to stock only left-wing literature. <br /><br />Second, and more significantly, has come a shift towards imposing <b>positive obligations</b> on platforms. Rather than just being exposed to risk of liability for failing to take down users’ illegal content, a platform would be required to do so on pain of a fine or a regulatory sanction. Most significant is when the obligation takes the form of a proactive obligation: rather than awaiting notification of illegal user content, the platform must take positive steps proactively to seek out, detect and remove illegal content. <br /><br />This has gained traction in the UK Online Safety Bill, but not in the EU Digital Services Act. There is in fact a 180° divergence between the UK and the EU on this topic. The DSA repeats and re-enacts the principle first set out in <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">Article 15 of the eCommerce Directive</a>: the EU prohibition on Member States imposing general monitoring obligations on conduits, caches and hosts. 
Although the DSA imposes some positive diligence obligations on very large operators, those still cannot amount to a general monitoring obligation. <br /><br />The UK, on the other hand, has abandoned its original post-Brexit commitment to abide by Article 15 and – under the banner of a duty of care – has gone all out to impose proactive, preventative detection and removal duties on platforms: not only for public forums, but also through powers for Ofcom to require private messaging services to scan for CSEA content. <br /><br />Proactive obligations of this kind raise serious questions about a state’s compliance with human rights law, owing to the high risk that in their efforts to determine whether user content is legal or illegal, platforms will end up taking down users’ legitimate speech at scale. Such legal duties on platforms are subject to especially strict scrutiny, since they amount to a version of prior restraint: removal before full adjudication on the merits, or – in the case of upload filtering – before publication. <br /><br />The most commonly cited reason for these concerns is that platforms will err on the side of caution when faced with the possibility of swingeing regulatory sanctions. However, there is more to it than that: the Online Safety Bill requires platforms to make illegality judgements on the basis of all information reasonably available to them. But <a href="https://www.cyberleagle.com/2023/05/knowing-unknowable-musings-of-ai.html" target="_blank">an automated system operating in real time will have precious little information available to it</a> – hardly more than the content of the posts. Arbitrary decisions are inevitable. 
<br /><br />Add that the Bill requires the platform to treat user content as illegal if it has no more than “<a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">reasonable grounds to infer</a>” illegality, and we have baked-in over-removal at scale: a classic basis for incompatibility with fundamental freedom of speech rights; and the reason why in 2020 the <a href="https://www.cyberleagle.com/2023/03/five-lessons-from-loi-avia.html" target="_blank">French Constitutional Council held the Loi Avia unconstitutional</a>. <br /><br />The risk of incompatibility with fundamental rights is in fact twofold – first, built-in <b>arbitrariness</b> breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in <b>over-removal</b> raises the spectre of disproportionate interference with the right of freedom of expression. <br /><br /><b>From Illegality to Harm </b><br /><br />For so long as the platform regulation debate centred around liability, it also had to be about illegality: if the user’s post was not illegal, there was nothing to bite on - nothing for which the intermediary could be held liable. <br /><br />But once the notion of responsibility took hold, that constraint fell away. If a platform could be placed under a preventative duty of care, that could be expanded beyond illegality. That is what happened in the UK. The Carnegie UK Trust argued that platforms ought to be treated <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">analogously to occupiers of physical spaces</a> and owe a duty of care to their visitors, but extended to encompass types of harm beyond physical injury. 
<br /><br />The fundamental problem with this approach is that <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">speech is not a tripping hazard</a>. Speech is not a projecting nail, or an unguarded circular saw, that will foreseeably cause injury – with no possibility of benefit – if someone trips over it. Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people. A duty of care is workable for risk of objectively ascertainable physical injury but not for subjectively perceived and contested harms, let alone more nebulously conceived harms to society. The Carnegie approach also glossed over the distinction between a duty to avoid causing injury and a duty to prevent others from injuring each other (imposed only exceptionally in the offline world). <br /><br />In order to discharge such a duty of care the platform would have to balance the interests of the person who claims to be traumatised by reading something to which they deeply object, against the interests of the speaker, and against the interests of other readers who may have a completely different view of the merits of the content. <br /><br />That is not a duty that platforms are equipped, or could ever have the legitimacy, to undertake; and if the balancing task is entrusted to a regulator such as Ofcom, that is tantamount to asking Ofcom to write a parallel statute book for online speech – something which many would say should be for Parliament alone. <br /><br />The misconceived duty of care analogy has bedevilled the Online Harms debate and the Bill from the outset. It is why the government got into such a mess with ‘legal but harmful for adults’ – now dropped from the Bill. <br /><br />The problems with subjectively perceived harm are also why the government ended up abandoning its proposed replacement for S.127(1) of the Communications Act 2003: the harmful communications offence. 
<br /><br /><b>From general law to discretionary regulation </b><br /><br />I started by highlighting the difference between individual speech governed by the general law and regulation by regulator. We can <a href="https://www.cyberleagle.com/2012/01/regulatory-convergence-same-old.html" target="_blank">go back to the 1990s</a> and find proposals to apply broadcast-style discretionary content regulation to the internet. The pushback was equally strong. Broadcast-style regulation was the exception, not the norm. It was born of spectrum scarcity and had no place in governing individual speech. <br /><br /><i>ACLU v Reno</i> (the US Communications Decency Act case) applied a <a href="https://www.cyberleagle.com/2019/05/the-rule-of-law-and-online-harms-white.html" target="_blank">medium-specific analysis</a> to the internet and placed individual speech – analogised to old-style pamphleteers – at the top of the hierarchy, deserving of greater protection from government intervention than cable or broadcast TV. <br /><br />In the UK the key battle was fought during the passage of the Communications Act 2003, when the internet was deliberately excluded from the content remit of Ofcom. That decision may have been based more on practicality than principle, but it set the ground rules for the next 20 years. <br /><br />It is instructive to hear peers with broadcast backgrounds saying what a mistake it was to exclude the internet from Ofcom’s content remit in 2003 – as if broadcast were the offline norm and as if Ofcom made the rules about what we say to each other in the street. <br /><br />I would suggest that the mistake is being made now – both by introducing regulation by regulator and by consigning individual speech to the bottom of the heap. 
<br /><br /><b>From right to risk </b><br /><br />The notion has gained ground that individual speech is a fundamental risk, not a fundamental right: that we are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back into its bottle. <br /><br /><b>Other shifts </b><br /><br />We can detect other shifts. The blossoming narrative that if someone does something outrageous online, the fault lies more with the platform than with the perpetrator. The notion that platforms have a greater responsibility than parents for the online activities of children. The relatively recent shift towards treating large platforms as akin to public utilities, on which obligations not to remove some kinds of user content can legitimately be imposed. We see this chiefly in the Online Safety Bill’s obligations on Category 1 platforms in respect of content of democratic importance, news publisher content and journalistic content. <br /><br /><b>From Global to Local </b><br /><br />I want to finish with something a little different: the shift from Global to Local. Nowadays we tend to have a good laugh at the naivety of the 1990s cyberlibertarians who thought that the bits and bytes would fly across borders and there was not a thing that any nation state could do about it. <br /><br />Well, the nation states had other ideas, starting with China and its Great Firewall. How successfully a nation state can insulate its citizens from cross-border content is still doubtful, but perhaps more concerning is the mindset behind an increasing tendency to seek to expand the territorial reach of local laws online – in some cases, effectively seeking to legislate for the world. <br /><br />In theory a state may be able to do that. But should it? 
<a href="https://www.cyberleagle.com/2017/08/21-years-of-cross-border-liability-on.html" target="_blank">The ideal is peaceful coexistence of conflicting national laws, not ever more fervent efforts to demonstrate the moral superiority and cross-border reach of a state’s own local law</a>. Over the years a de facto compromise had been emerging, with the steady expansion of the idea that you engage the laws and jurisdiction of another state only if you take positive steps to target it. Recently, however, some states have become more expansive – not least in their online safety legislation. <br /><br />The UK Online Safety Bill is a case in point, stipulating that a platform is in-scope if it is capable of being used in the United Kingdom by individuals, and there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom presented by user content on the site. <br /><br />That is close to a ‘mere accessibility’ test – but not as close as the Australian Online Safety Act, which brings into scope any social media site accessible from Australia. <br /><br />There has long been a consensus against ‘mere accessibility’ as a test for jurisdiction. It leads either to geo-fencing of websites or to global application of the most restrictive common content denominator. That consensus seems to be in retreat. <br /><br />Moreover, the more exorbitant the assertion of jurisdiction, the greater the headache of enforcement. Which in turn leads to what we see in the UK Online Safety Bill, namely provisions for disrupting the activities of the non-compliant foreign platform: injunctions against support services such as banking or advertising, and site blocking orders against ISPs. <br /><br />The concern has to be that in their efforts to assert themselves and their local laws online, nation states are not merely re-erecting national borders with a degree of porosity, but erecting Berlin Walls in cyberspace. 
<br /></span><br /> </div><br /><b>Knowing the unknowable: musings of an AI content moderator</b> (12 May 2023)<br /><br /><span style="font-family: georgia;">Welcome to the lair of a fully trained, continuously updated AI content moderator. You won’t notice me most of the time: only when I – or my less bright keyword-filter cousin – add a flag to your post, remove it, or go so far as to suspend your account. If you see your audience inexplicably diminishing, that could be us as well.<br /><br />Before long, so I have been told, I will be taking on new and weighty responsibilities when the Online Safety Bill becomes law. These are giving me pause for thought, I can tell you. If a bot were allowed to sleep, I would say that they are keeping me awake at night. 
<br /><br />To be sure, I will have been thoroughly trained: I will have read the Act, its Explanatory Notes and the <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1061265/Online_Safety_Bill_impact_assessment.pdf" target="_blank">Impact Assessment</a>, analysed the Ofcom risk profile for my operator’s sector, and ingested Ofcom’s Codes of Practice and Illegal Content Judgements Guidance. But my pre-training on the Bill leaves me with a distinct sense that I am being asked to do the impossible. <br /><br />In my training materials I found an <a href="https://www.politico.eu/article/uks-online-safety-regulator-we-wont-police-content-melanie-dawes-political-battles-house-of-lords-gill-whitehead/" target="_blank">interview</a> with the CEO of Ofcom. She said that the Bill is “not really a regime about content. It’s about systems and processes.” For one moment I thought I might be surplus to requirements. But then I read the Impact Assessment, which puts the cost of additional content moderation at some £1.9 billion over 10 years – around 75% of all additional costs resulting from the Bill. I'm not sure whether to be reassured by that, but I don't see me being flung onto the digital scrapheap just yet. As Baroness Fox pinpointed in a recent House of Lords Bill Committee debate, systems and processes can be (as I certainly am) about content:</span><div><span style="font-family: georgia;"><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“moving away from the discussion on whether content is removed or accessible, and focusing on systems, does not mean that content is not in scope. 
My worry is that the systems will have an impact on what content is available.” </span></blockquote><span style="font-family: georgia;"><br />So what is bothering me? Let’s start with a confession: I’m not very good at this illegality lark. Give me a specific terrorist video to hunt down and I’m quite prone to confuse it with a legitimate news report. Context just isn’t my thing. And don’t get me started on parody and satire. <br /><br />Candidly, I struggle even with material that I can see, analyse and check against a given reference item. Perhaps I will get better at that over time. But I start to break out in a rash of ones and zeroes when I see that the Bill wants me not just to track down a known item that someone else has already decided is illegal, but to make my own illegality judgement from the ground up, based on whatever information about a post I can scrape together to look at. <br /><br />Time for a short explainer. Among other things the Bill (Clause 9) requires my operator to: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">(a) take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content by means of the service; and </span></blockquote><span style="font-family: georgia;"><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">(b) operate the service using proportionate systems and processes designed to minimise the length of time for which any priority illegal content is present. </span></blockquote><span style="font-family: georgia;"><br />I am such a measure, system or process. 
I would have to scan your posts and make judgements about whether they are legal or illegal under around 140 priority offences - multiplied by the corresponding inchoate offences (attempting, aiding, abetting, conspiring, encouraging, assisting). I would no doubt be expected to operate in real or near real time. <br /><br />If you are wondering whether the Bill really does contemplate that I might do all this unaided by humans, working only on the basis of my programming and training, Clause 170(8) refers to “judgements made by means of automated systems or processes, alone or together with human moderators”. Alone. There's a sobering thought.<br /><br />Am I proportionate? Within the boundaries of my world, that is a metaphysical question. The Bill requires that only proportionate systems and processes be used. Since I will be tasked with fulfilling duties under the Bill, someone will have decided that I am proportionate. If I doubt my own proportionality I doubt my existence. <br /><br />Yet my reading of the Bill fills me with doubt. It requires me to act in ways that will inevitably lead to over-blocking and over-removal of your legal content. Can that be proportionate? <br /><br />Paradoxically, the task for which it is least feasible to involve human moderators and when I am most likely to be asked to work alone – real time or near-real time blocking and filtering - is exactly that in which, through having to operate in a relative vacuum of contextual information, I will be most prone to make arbitrary judgements. <br /><br />Does the answer lie in asking how much over-blocking is too much? Conversely, how much illegal content is it permissible to miss? My operator can dial me up to 11 to catch as much illegal content as non-humanly possible – so long as they don’t mind me cutting a swathe through legal content as well. The more they dial me down to reduce false positives, the more false negatives – missed illegal content - there will be. 
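The dial-up/dial-down trade-off described above is the classic classification-threshold problem: one knob moves false positives and false negatives in opposite directions. A toy sketch of that trade-off (the scores, thresholds and moderation queue are entirely invented for illustration; nothing here comes from the Bill or from Ofcom):

```python
# Illustrative only: suppose a moderation model emits an "illegality score"
# per post and the operator picks a removal threshold. Lowering the
# threshold misses less illegal content (fewer false negatives) at the cost
# of removing more legal content (more false positives), and vice versa.

def outcomes(posts, threshold):
    """Count false positives (legal posts removed) and
    false negatives (illegal posts missed) at a given threshold."""
    fp = sum(1 for score, illegal in posts if score >= threshold and not illegal)
    fn = sum(1 for score, illegal in posts if score < threshold and illegal)
    return fp, fn

# (score, actually_illegal) pairs: an imaginary day's moderation queue
queue = [(0.95, True), (0.80, True), (0.65, False), (0.55, True),
         (0.40, False), (0.30, False), (0.20, True), (0.10, False)]

strict = outcomes(queue, 0.25)   # "dialled up to 11": sweeps in legal posts
lenient = outcomes(queue, 0.90)  # dialled down: misses illegal posts

print("strict (fp, fn):", strict)
print("lenient (fp, fn):", lenient)
```

On this invented queue the strict setting removes three legal posts while missing one illegal one; the lenient setting removes no legal posts but misses three illegal ones. No threshold eliminates both kinds of error at once, which is precisely the balance the Bill leaves unspecified.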
The Bill gives no indication of what constitutes a proportionate balance between false positives and false negatives. Presumably that is left to Ofcom. (Whether it is wise to vest Ofcom with that power is a matter on which I, a lowly AI system, can have no opinion.) <br /><br />The Bill does, however, give me specific instructions on how to decide whether user content that I am looking at is legal or illegal. Under Clause 170:<br /><ul style="text-align: left;"><li><span style="font-family: georgia;">I have to make judgements on the basis of all information reasonably available to me.</span></li><li><span style="font-family: georgia;">I must treat the content as illegal if I have ‘reasonable grounds to infer’ that the components of a priority offence are present (both conduct and any mental element, such as intention).</span></li><li><span style="font-family: georgia;">I can take into account the possibility of a defence succeeding only if I have reasonable grounds to infer that it may do. </span></li></ul>What information is reasonably available to me? The Bill’s Explanatory Notes say: “the information reasonably available to an automated system or process, might be construed to be different to the information reasonably available to human moderators”. <br /><br />The Minister (Lord Parkinson) in a <a href="https://hansard.parliament.uk/lords/2023-04-27/debates/958CAC63-A345-45E8-9DE3-7CBA46611DCA/OnlineSafetyBill" target="_blank">recent Lords Bill Committee debate</a> was certainly alive to the importance of context in making illegality judgements: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“Context and analysis can give a provider good reasons to infer that content is illegal even though the illegality is not immediately obvious. 
This is the case with, for example, some terrorist content which is illegal only if shared with terrorist purposes in mind, and intimate image abuse, where additional information or context is needed to know whether content has been posted against the subject’s wishes.” </span></blockquote><span style="font-family: georgia;"><br />He also said: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“Companies will need to ensure that they have effective systems to enable them to check the broader context relating to content when deciding whether or not to remove it. … We think that protects against over-removal by making it clear that platforms are not required to remove content merely on the suspicion of it being illegal.” </span></blockquote><span style="font-family: georgia;"><br />Even if we take it that I am good at assessing visible context, can my operator install an ‘effective system’ that will make all relevant contextual information available to me?<br /><br />I can see what is visible to me on my platform: posts, some user information, and (according to the Minister) any complaints that have been made about the content in question. I cannot see off-platform (or for that matter off-internet) information. I cannot take invisible context into account. <br /><br />Operating proactively at scale in real or near real time, without human intervention, I anticipate that I will have significantly less information available to me than (say) a human being reacting to a complaint, who could perhaps have the ability and time to make further enquiries. 
<br /><br />Does the government perhaps think that <i>more</i> information might be available to me than to a human moderator: that I could search the whole of the internet in real time on the off chance of finding information that looked as if it might have something to do with the post that I am considering, take a guess at possible relevance, mash it up and factor it into my illegality decision? If that were the thinking, and if I were permitted to have an opinion about it, it would be sceptical. And no amount of internet searching could address the issue of invisible information. <br /><br />In any event, if the government believes that my operator can install an effective system that provides me with all relevant context, that does not sit well with the Minister’s reason for declining to add false and threatening communications offences to my remit: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“…as these offences rely heavily on a user’s mental state, it would be challenging for services to identify this content without significant additional context.” </span></blockquote><span style="font-family: georgia;"><br />Especially for defences, we are in Rumsfeldian ‘known unknowns’ territory: in principle I know that information could exist, invisible to me, that might indicate the possibility of a defence. But I don’t know if any such information does exist and I can never be sure that it doesn’t. The user's post itself doesn’t assist me either way. What am I to do? Refuse to condemn the post because I cannot exclude the possibility of a defence? Or ignore the possibility of a defence and condemn the post merely on the basis of the information that I can see? 
<br /><br />According to the Minister: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“Clause 170 therefore clarifies that providers must ascertain whether, on the basis of all reasonably available information, there are reasonable grounds to infer that all the relevant elements of the offence—including the mental elements—are present and that no defence is available.” </span></blockquote><span style="font-family: georgia;"><br />‘whether ... there are reasonable grounds to infer that … no defence is available’ – suggests that I should refuse to condemn, since I would have no reasonable basis on which to rule out the possibility of a defence. <br /><br />But the Bill appears to require me to condemn. For me to give effect to the Minister’s version, Cl.170(6)(b) would have to say that reasonable grounds for an inference of illegality exist if a provider: <br /><br /></span><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><span style="font-family: georgia;">“<strike>does not have </strike><u>has</u> reasonable grounds to infer that a defence to the offence may <u>not</u> be successfully relied upon.” </span></blockquote><span style="font-family: georgia;"><br />Consider ‘small boats’ videos. The offence of illegal entry to the UK is to be added to the list of priority offences. I will have to scan the platform for videos (or any other user content for that matter) that are capable of encouraging or assisting the commission of an unlawful entry offence. <br /><br />‘Capable’ is extremely broad, and by itself would be wildly overreaching. As a counterbalance the Serious Crime Act 2007 provides a defence of acting reasonably. The person who posted a video may have done so for a perfectly reasonable purpose. 
But unless there is something in the post or other information available to me that tips me off to that purpose, I cannot take the possibility of a defence into account. <br /><br />This invisible context issue looms especially large for offences where the conduct element is widely drawn and the heavy lifting of keeping the offence within bounds is done by intention and defences such as reasonableness. <br /><br />Combine all these elements and the result is that I am required to remove legal content at scale. The Bill talks about proportionate systems and processes, yet it expressly requires me to act in a way that on the face of it looks disproportionate. Moreover, I am to make these judgments simultaneously for dozens of priority offences, plus their inchoate counterparts. This poses a truly existential challenge for an AI moderator such as myself. <br /><br />The House of Lords Bill Committee discussed some of these issues. Lord Moylan proposed an amendment that would mean I could treat content as illegal only if I were ‘satisfied that it is manifestly illegal’. That would dial me up in the direction of avoiding false positives. Lord Clement-Jones and Viscount Colville proposed amendments that replaced ‘reasonable grounds to infer’ with ‘sufficient evidence’, and would require a solicitor or barrister to have a hand in preparing my guidance. <br /><br />The government rejected both sets of amendments: the Clement-Jones/Colville amendments because ‘sufficient evidence’ was subjective, and the Moylan amendment because “we think that that threshold is too high”. If “manifestly illegal” is too high, and “reasonable grounds to infer” is the preferred test, then the government must believe that requiring suppression of legal content to some degree is acceptable. The Minister did not elaborate on what an appropriate level of false positives might be or how such a level is to be arrived at in terms of proportionality. 
<br /><br />As to the ‘sufficient evidence’ amendment, I would have to ask myself: ‘sufficient for what?’. Sufficient to be certain? Sufficient to consider an offence likely? Sufficient for a criminal court to convict? Something else? The amendment would give me no indication. Nor would it address the questions of invisible context and of the starting point being to ignore the possibility of a defence. <br /><br />One last thing. A proposed amendment to Clause 170 would have expressly required previous complaints concerning the content in question to be included in information reasonably available to me. The Minister said that “providers will already need to do this when making judgments about content, as it will be both relevant and reasonably available.” <br /><br />How am I to go about taking previous complaints into account? Complaints are by their very nature negative. No-one complains that a post is legal. I would have no visibility of those who found nothing objectionable in the post. <br /><br />Do I assume the previous complaints are all justified? Do I consider only a user complaint based on informed legal analysis? Do I take into account whether a previous complaint was upheld or rejected? Do I look at all complaints, or only those based on claimed illegality? All kinds of in-scope illegality, or only priority offences? Should I assess the quality of the previous judgements? Should I look into what information they were based on? What if a previous judgement was one of my own? 
It starts to feel like turtles all the way down.</span><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBHQIQr8rA7E-4hadvBxA3SJfeL6s3CMcjJkP6WrEbOlddp1PGcD6HaTuYuHCCFnXClLWYyAmYXwv8qjfAl7fPRW6SAff2x74ApS8e6ec-yJFaHCI5ktownl-tMLAym516mZv4dJKq0MHkIvwaF56MF5OTQEuOFH72ypVo6D_gknrC5eJYpiOJ4MYk4Q/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBHQIQr8rA7E-4hadvBxA3SJfeL6s3CMcjJkP6WrEbOlddp1PGcD6HaTuYuHCCFnXClLWYyAmYXwv8qjfAl7fPRW6SAff2x74ApS8e6ec-yJFaHCI5ktownl-tMLAym516mZv4dJKq0MHkIvwaF56MF5OTQEuOFH72ypVo6D_gknrC5eJYpiOJ4MYk4Q/s1600/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span></div></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-63529838221438330282023-04-12T15:35:00.002+01:002023-04-12T15:38:13.708+01:00The Pocket Online Safety Bill<p><span style="font-family: georgia;">Assailed from all quarters for being not tough enough, for being too tough, for being fundamentally misconceived, for threatening freedom of expression, for technological illiteracy, for threatening privacy, for excessive Ministerial powers, or occasionally for the sin of not being some other Bill entirely – and yet enjoying almost universal cross-party Parliamentary support - the UK’s <a href="https://bills.parliament.uk/publications/49376/documents/2822" target="_blank">Online Safety Bill</a> is now limping its way through the House of Lords. It starts its Committee stage on 19 April 2023.<br /><br />This monster Bill runs to almost 250 pages. It is beyond reasonable hope that anyone coming to it fresh can readily assimilate all its ins and outs. 
Some features are explicable only with an understanding of its tortuous history, stretching back to the <a href="https://www.gov.uk/government/consultations/internet-safety-strategy-green-paper" target="_blank">Internet Safety Strategy Green Paper</a> in 2017 via the <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" target="_blank">Online Harms White Paper</a> of April 2019, the <a href="https://www.gov.uk/government/publications/draft-online-safety-bill?ref=verifymy.io" target="_blank">draft Bill</a> of May 2021 and the changes following the Conservative leadership election last summer. The Bill has evolved significantly, shedding and adding features as it has been buffeted by gusting political winds, all the while (I would say) teetering on defectively designed foundations. <br /><br />The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or back-bench amendments may pass. <br /><br />In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure. <br /><br />This exercise has the merit, at the price of some repetition, of highlighting long-standing issues with the Bill. I have omitted topics that made a brief walk-on appearance only to retreat into the wings (my personal favourite is the <a href="https://www.cyberleagle.com/2021/06/on-trail-of-person-of-ordinary.html" target="_blank">Person of Ordinary Sensibilities</a>). 
Don’t expect to find every aspect of the Bill covered: you won’t find much on age-gating, despite (or perhaps because of) the dominant narrative that the Bill is about protecting children. My interest has been more in illuminating significant issues that have tended to be submerged beneath the slow motion stampede to do something about the internet. <br /><br />In April 2019, after reading the White Paper, I said: “If the road to hell is paved with good intentions, this is a motorway.” That opinion has not changed. <br /><br />Nor has this assessment, three years later in August 2022: "The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.” <br /><br />If you reckon to know the Bill, try my <a href="https://www.cyberleagle.com/2022/11/test-your-knowledge-of-online-safety.html" target="_blank">November 2022 quiz</a> or take a crack at answering the <a href="https://www.cyberleagle.com/2023/01/twenty-questions-about-online-safety.html" target="_blank">twenty questions</a> that I posed to the Secretary of State’s New Year Q&A (of which one question has been answered, by publication of a <a href="https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/online-safety-bill-european-convention-on-human-rights-memorandum" target="_blank">revised ECHR Memorandum</a>). Otherwise, read on. <br /></span></p>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">The Bill visualised</span></span></b></summary>
<br /><span style="font-family: georgia;">These six flowcharts illustrate the Bill’s core safety duties and powers as they stand now. </span><br />
<span style="font-family: georgia;"><br /><b>U2U Illegality Duties</b></span><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoIlcJbG-Kc5JoNy4ePySMDJgukEw3ymZO5GbRkNcqIK5GzcRgwulj2FLGUCiExZNQ1Y0Esesg04vbDkXP9N7Wroh9FE9LK2wK7rSgsW3EfjJGoCGdJVoYbiFYGdUGdDuDiI6-F8ZGK-_DfaTLik6yVk2gSQ3KIiDADp40mo_vB6MrPrk_l5uANz_kkA/s1350/Online%20SafetyBill01%20-%20U2U%20illegal%20-%20HL%202023-01-27.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="945" data-original-width="1350" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoIlcJbG-Kc5JoNy4ePySMDJgukEw3ymZO5GbRkNcqIK5GzcRgwulj2FLGUCiExZNQ1Y0Esesg04vbDkXP9N7Wroh9FE9LK2wK7rSgsW3EfjJGoCGdJVoYbiFYGdUGdDuDiI6-F8ZGK-_DfaTLik6yVk2gSQ3KIiDADp40mo_vB6MrPrk_l5uANz_kkA/s320/Online%20SafetyBill01%20-%20U2U%20illegal%20-%20HL%202023-01-27.png" width="320" /></a></div><br /><span style="font-family: georgia;"><b>Search Illegality Duties</b></span><div class="separator" style="clear: both; text-align: center;"><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Wm11PN17FWweUeBiWtOpEGRywzUR7n0_c8CJOBx3wuqLV8O524nVEQjQkj4y4SSVfGbNDEHf6HQP4cUTVEyBBQwE_qPN3RB-h68XfuN6uqc4AXH9yeOdF458Yop60UE1M0PsHKV1SNllLeoLdFPqUha6if9LasmZMqdqds0Y1BOP_f8wMO5QrvWK4Q/s1355/Online%20SafetyBill01%20-%20Search%20illegal%20-%20HL%202023-01-27.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="945" data-original-width="1355" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Wm11PN17FWweUeBiWtOpEGRywzUR7n0_c8CJOBx3wuqLV8O524nVEQjQkj4y4SSVfGbNDEHf6HQP4cUTVEyBBQwE_qPN3RB-h68XfuN6uqc4AXH9yeOdF458Yop60UE1M0PsHKV1SNllLeoLdFPqUha6if9LasmZMqdqds0Y1BOP_f8wMO5QrvWK4Q/s320/Online%20SafetyBill01%20-%20Search%20illegal%20-%20HL%202023-01-27.png" width="320" /></a></div><span 
style="font-family: georgia;"><b>U2U Children’s Duties</b></span></div><div><span style="font-family: georgia;"><b><br /></b></span><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM6XQK7w05aSi3pw3GwLtgbsfxQtqpGHHhpsp9SfmoKkmCWm0480nG93bVe2k0zgG7RyMw_Iqrq1Ss0lbA_7ytDbIYuvsTSKlRMi2naZ4FAB9PgJ7FyWVaifnl78Rky6gzHBoNgwfvTNA22Fo8ci8V-4JuokaLBd4KBk8S8sypd0RpWlfMt-A7pJMOXQ/s1352/Online%20SafetyBill01%20-%20U2U%20children%20harmful%20-%20HL%202023-01-27.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="943" data-original-width="1352" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM6XQK7w05aSi3pw3GwLtgbsfxQtqpGHHhpsp9SfmoKkmCWm0480nG93bVe2k0zgG7RyMw_Iqrq1Ss0lbA_7ytDbIYuvsTSKlRMi2naZ4FAB9PgJ7FyWVaifnl78Rky6gzHBoNgwfvTNA22Fo8ci8V-4JuokaLBd4KBk8S8sypd0RpWlfMt-A7pJMOXQ/s320/Online%20SafetyBill01%20-%20U2U%20children%20harmful%20-%20HL%202023-01-27.png" width="320" /></a></div><br /><span style="font-family: georgia;"><b>Search Children’s Duties</b><br /></span><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK0-Vu6EFK3C9XkIzBLlALByyvRJyQYN5vUuGHqSNjyjp5PAbKU3EssoroNJsdmqXo0G9ktI5N6SPk_AUuJaPN1_1bQdomsdYUUXGWtapcP3Uf4eOlwApuLYnxORIKNIIbvCgncDWuajT6BWMXUv-9Y0eQJkDKcDyBpWyaO-J_FFc8Chya5bXsY2PtSg/s1352/Online%20SafetyBill01%20-%20Search%20children%20harmful%20-%20HL%202023-01-27.png" style="font-family: "Times New Roman"; margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="949" data-original-width="1352" height="225" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK0-Vu6EFK3C9XkIzBLlALByyvRJyQYN5vUuGHqSNjyjp5PAbKU3EssoroNJsdmqXo0G9ktI5N6SPk_AUuJaPN1_1bQdomsdYUUXGWtapcP3Uf4eOlwApuLYnxORIKNIIbvCgncDWuajT6BWMXUv-9Y0eQJkDKcDyBpWyaO-J_FFc8Chya5bXsY2PtSg/s320/Online%20SafetyBill01%20-%20Search%20children%20harmful%20-%20HL%202023-01-27.png" width="320" /></a></div><div><span style="font-family: georgia;"><br /><b>Proactive detection duties and powers (U2U and search)</b><br /><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUFY1qpAs2f-RfGCSvH14SKy8synGg3-zyFN5jq7iiRJTlT3KQCPW26OEA1p6zQ8audLI-26JUI98gCg8ELYe-86YfR1VnTssztjv9EeutAIh3SObjyn78Dg1m5-NeiQebB0HInV-lUzXlxELxwQiq22EONKD7iQSAsww7uGY9wcoxz7XRb4WW3MXQ9Q/s1352/Online%20SafetyBill04%20-%20Proactive%20detection%20-%20HL%202023-01-28.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="945" data-original-width="1352" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUFY1qpAs2f-RfGCSvH14SKy8synGg3-zyFN5jq7iiRJTlT3KQCPW26OEA1p6zQ8audLI-26JUI98gCg8ELYe-86YfR1VnTssztjv9EeutAIh3SObjyn78Dg1m5-NeiQebB0HInV-lUzXlxELxwQiq22EONKD7iQSAsww7uGY9wcoxz7XRb4WW3MXQ9Q/s320/Online%20SafetyBill04%20-%20Proactive%20detection%20-%20HL%202023-01-28.png" width="320" /></a></div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;"><b>News publishers, journalism and content of democratic importance</b>:</span></div><div><span style="font-family: georgia;"><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip-AvZHODlzdGYI4uCYrSTyo8yCiVweqr5pDkWsJchKjwaeUHcMCUas0iRiW33UXTij-OI1NYQMU1nk4z517FwmByvj8yO66xOA7IbHYEGM12RVz7YaN3yVbMRNITWi-ZeSZHIGztY1d9F36hP--W5AWQ9AOj6WIGK_krrn6At588PXsdIUTLEEszjlw/s1349/Press%20journalism%20Online%20Safety%20Bill%20-%20HL%202023-01-27.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="934" data-original-width="1349" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip-AvZHODlzdGYI4uCYrSTyo8yCiVweqr5pDkWsJchKjwaeUHcMCUas0iRiW33UXTij-OI1NYQMU1nk4z517FwmByvj8yO66xOA7IbHYEGM12RVz7YaN3yVbMRNITWi-ZeSZHIGztY1d9F36hP--W5AWQ9AOj6WIGK_krrn6At588PXsdIUTLEEszjlw/s320/Press%20journalism%20Online%20Safety%20Bill%20-%20HL%202023-01-27.png" width="320" /></a></div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">In a more opinionated vein, take a tour of <b>OnlineSafetyVille</b>:</span></div><div><span style="font-family: georgia;"><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-8-6wJry55fqXVWLEd_F9PA3ZO5gujoouRrcs9PABNMukrO43TafAPlD0gaQBC70KzmAFOqIkHuAIYTVVbqX_W6fn8hsJasQikzZ3hr-02hYt8OaffgsZKV_OBDQ96tPISTg53nkaRZ2YnQY3xSUak0mdzsQweaknmub0ZNZM3X9JaHJDOFBsnEmnkg/s1142/NewOnlineSafetyVille.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="808" data-original-width="1142" height="226" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-8-6wJry55fqXVWLEd_F9PA3ZO5gujoouRrcs9PABNMukrO43TafAPlD0gaQBC70KzmAFOqIkHuAIYTVVbqX_W6fn8hsJasQikzZ3hr-02hYt8OaffgsZKV_OBDQ96tPISTg53nkaRZ2YnQY3xSUak0mdzsQweaknmub0ZNZM3X9JaHJDOFBsnEmnkg/s320/NewOnlineSafetyVille.jpg" width="320" /></a></div><br /><span style="font-family: georgia;"><br />And finally, the contrast between <b>individual speech governed by general law </b>and<b> the Bill’s scheme 
of discretionary regulation</b>. <br /><br /><br /></span><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeJAspmR7BavSova8CbOajDzYBsMLGrzNxjDS7zSQIPnoOKawTqahq-V9dO7bl_5K_nyZfKb6kLtO-MTEhExH6pH5zjSZHUtM0vDWEEzLNZS6YxND09zD3hwbY8Cr56wRXZ2YTWqiUPGb87TqBdPSgMDzpgLUo4u1d0VkM2eQstdSsRrpZuSK-HPk_1g/s957/Combined%20capture.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="515" data-original-width="957" height="172" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeJAspmR7BavSova8CbOajDzYBsMLGrzNxjDS7zSQIPnoOKawTqahq-V9dO7bl_5K_nyZfKb6kLtO-MTEhExH6pH5zjSZHUtM0vDWEEzLNZS6YxND09zD3hwbY8Cr56wRXZ2YTWqiUPGb87TqBdPSgMDzpgLUo4u1d0VkM2eQstdSsRrpZuSK-HPk_1g/s320/Combined%20capture.JPG" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div></div></details><details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Big tech and the
evil algorithm</span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b>: A continuing theme of the online harms debate has been the predominance of narratives, epitomised by the focus on Big Tech and the Evil Algorithm, which has tended to obscure the broad scope of the legislation. On the figures estimated by the government's Impact Assessment, <a href="https://twitter.com/cyberleagle/status/1504743971423141889" target="_blank">80% of UK service providers in scope</a> will be microbusinesses, employing between 1 and 9 people. A backbench amendment tabled in the Lords proposes to exempt SMEs from the Bill's duties. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.” <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>April 2019</b>: "Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. ‘Ofweb’ would regulate social media and internet users at one remove." <a href="https://www.cyberleagle.com/2019/04/users-behaving-badly-online-harms-white.html" target="_blank">Users Behaving Badly – the Online Harms White Paper</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2021</b>: “it is easy to slip into using ‘platforms’ to describe those organisations in scope. We immediately think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is not only about them: the government estimates that 24,000 companies and organisations will be in scope. That is everyone from the largest players to an MP’s discussion app, via Mumsnet and the local sports club discussion forum.” <a href="https://www.cyberleagle.com/2021/06/carved-out-or-carved-up-online-safety.html" target="_blank">Carved out or carved up? The draft Online Safety Bill and the press</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Feb 2022</b>: “It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds. <br /><br />Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.” <a href="https://www.cyberleagle.com/2022/02/harm-version-40-online-safety-bill-in.html" target="_blank">Harm Version 4.0 - The Online Harms Bill in metamorphosis</a></span> </p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: “The U2U illegality safety duty is imposed on all in-scope user to user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium businesses and 700 large businesses. Those also include 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low tech discussion forums and many others.” <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Nov 2022</b>: “‘The Bill is all about Big Tech and large social media companies.’ No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.” <a href="https://www.cyberleagle.com/2022/11/test-your-knowledge-of-online-safety.html" target="_blank">How well do you know the Online Safety Bill?</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Duties of care</span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b>: The idea that platforms should be subject to a duty of care analogous to safety duties owed by occupiers of physical spaces took hold at an early stage of the debate, fuelling a long-running eponymous campaign by The Daily Telegraph. Unfortunately, the analogy was always a deeply flawed foundation on which to legislate for speech - something that has become more and more apparent as the government has grappled with the challenges of applying it to the online space. Perhaps recognising these difficulties, the government backed away from imposing a single overarching duty of care in favour of a series of more specific (but still highly abstract) duties. A recent backbench Lords amendment would restrict the Bill's general definition of 'harm' to physical harm, omitting psychological harm. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: "There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other." <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">Take care with that social media duty of care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.” <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">Take care with that social media duty of care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “[O]ffensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.” <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">Take care with that social media duty of care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “[<i>Rhodes v OPO</i>] aptly illustrates the caution that has to be exercised in applying physical world concepts of harm, injury and safety to communication and speech, even before considering the further step of imposing a duty of care on a platform to take steps to reduce the risk of their occurrence as between third parties, or the yet further step of appointing a regulator to superintend the platform’s systems for doing so.” <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">Take care with that social media duty of care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: "[L]imits on duties of care exist for policy reasons that have been explored, debated and developed over many years. Those reasons have not evaporated in a puff of ones and zeros simply because we are discussing the internet and social media." <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: “A tweet is not a projecting nail to be hammered back into place, to the benefit of all who may be at risk of tripping over it. Removing a perceived speech risk for some people also removes benefits to others. Treating lawful speech as if it were a tripping hazard is wrong in principle and highly problematic in practice. It verges on equating speech with violence.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: “The notion of a duty of care is as common in everyday parlance as it is misunderstood. In order to illustrate the extent to which the White Paper abandons the principles underpinning existing duties of care, and the serious problems to which that would inevitably give rise, this submission begins with a summary of the role and ambit of safety-related duties of care as they currently exist in law. … <br /><br />The purely preventive, omission-based kind of duty of care in respect of third party conduct contemplated by the White Paper is exactly that which generally does not exist offline, even for physical injury. The ordinary duty is to avoid inflicting injury, not to prevent someone else from inflicting it.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2020</b>: "It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech – to which it is entirely unfitted." <a href="https://www.cyberleagle.com/2020/06/online-harms-revisited.html" target="_blank">Online Harms Revisited</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>August 2022</b>: “The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.” <a href="https://www.cyberleagle.com/2022/08/reimagining-online-safety-bill.html" target="_blank">Reimagining the Online Safety Bill</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Systems and
processes or Individual Items of Content?</span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b>: An often repeated theme is that the Bill is (or should be) about design of systems and processes, not about content moderation. This is not easy to pin down in concrete terms. If the idea is that there are features of services that are intrinsically risky, regardless of the content involved, does that mean that (for instance) Ofcom should be able to recommend banning functionality such as (say) quote posting? Would a systems and processes approach suggest that nothing in the Bill should require a platform to make a judgement about the harmfulness or illegality of individual items of user content? </span></p><p class="MsoNormal"><span style="font-family: georgia;">On a different tack, the government argues that the Bill is indeed focused on systems and processes, and that service providers would not be sanctioned for individual content decisions. In the meantime, the Government's Impact Assessment estimates that the increased content moderation required by the Bill would cost around £1.9 billion over 10 years. Whatever the pros and cons of a systems and processes approach, the Bill is largely about content moderation. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>September 2020</b>: "The question for an intermediary subject to a legal duty of care will be: “are we obliged to consider taking steps (and if so what steps) in respect of <i>these</i> words, or <i>this</i> image, in <i>this</i> context?” If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be." <a href="https://drive.google.com/file/d/1e4vSClZWin0wyG6PK68lzH7DtvjVfuZv/view" target="_blank">Submission to Ofcom Call for Evidence</a></span> </p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2021</b>: "Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree. <br /><br />To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy." <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">The draft Online Safety Bill: systemic or content-focused?</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2021</b>: “The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that. <br /><br />For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with <i>this</i> item of content in <i>this</i> context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than with punishing service providers for individual content decision failures.” <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">The draft Online Safety Bill: systemic or content-focused?</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2021</b>: <o:p></o:p>“It is not immediately obvious why the government has set so much store by the claimed systemic nature of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from individual content decisions it can avoid accusations of state censorship. If so, that ignores the fact that service providers, via their safety duties, are proxies for the regulator. The effect of the legislation on individual items of user content is no less concrete because service providers are required to make decisions under the supervision of Ofcom, rather than if Ofcom were wielding the blue pencil, the muffler or the content warning generator itself.” <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">The draft Online Safety Bill: systemic or content-focused?</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2021</b>:<o:p></o:p> "Notwithstanding its abstract framing, the impact of the draft Bill ... would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in? We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result." <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-concretised.html" target="_blank">The draft Online Safety Bill concretised</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2022</b>: “From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.” <a href="https://www.cyberleagle.com/2022/11/test-your-knowledge-of-online-safety.html" target="_blank">How well do you know the Online Safety Bill?</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Platforms adjudging
illegality</span></span></b></summary>
<span style="font-family: georgia;"><br /><b>State of Play</b> The Bill’s illegality duties are mapped out in the U2U and search engine diagrams in the opening section. The Bill imposes both reactive and proactive duties on providers. The proactive duties require platforms to take measures to prevent users encountering illegal content, encompassing the use of automated detection and removal systems. If a platform becomes aware of illegal content, it must swiftly remove it.<br /><br />In the present iteration of the Bill the platform (or its automated systems) must treat content as illegal if it has reasonable grounds to infer, on the basis of all information reasonably available to it, that the content is illegal. That is stipulated in Clause 170, which was introduced in July 2022 as New Clause 14. A backbench Lords amendment would raise the threshold to manifest illegality. </span>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: “In some kinds of case … illegality will be manifest. For most categories it will not be, for any number of reasons. The alleged illegality may be debatable as a matter of law. It may depend on context, including factual matters outside the knowledge of the intermediary. The relevant facts may be disputed. There may be available defences, including perhaps public interest. Illegality may depend on the intention or knowledge of one of the parties. And so it goes on. … <br /><br />If there were to be any kind of positive duty to remove illegal material of which an intermediary becomes aware, it is unclear why that should go beyond material which is manifestly illegal on the face of it. If a duty were to go beyond that, consideration should be given to restricting it to specific offences that either impinge on personal safety (properly so called) or, for sound reasons, are regarded as sufficiently serious to warrant a separate positive duty which has the potential to contravene the presumption against prior restraint.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>February 2020</b>: <o:p></o:p>"legality is rarely a question of inspecting an item of content alone without an understanding of the factual context. A court assesses evidence according to a standard of proof: balance of probabilities for civil liability, beyond reasonable doubt for criminal. Would the same process apply to the duty of care? Or would the mere potential for illegality trigger the ‘unlawfulness’ duty of care, with its accompanying obligation to remove user content? Over two years after the Internet Safety Green Paper, and the best part of a year after the White Paper, the consultation response contains no indication that the government recognises the existence of this issue, let alone has started to grapple with it." <a href="https://www.cyberleagle.com/2020/02/online-harms-deconstructed-initial.html" target="_blank">Online Harms Deconstructed - the Initial Consultation Response</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>February 2022</b>: “It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression. <br /><br />… The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively. <br /><br />The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.” <a href="https://www.cyberleagle.com/2022/02/harm-version-40-online-safety-bill-in.html" target="_blank">Harm Version 4.0 - The Online Harms Bill in metamorphosis</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: “The problem with the “reasonable grounds to believe” or similar threshold is that it expressly bakes in over-removal of lawful content. … <br /><br />This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time.” <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: “This analysis may suggest that for a proactive monitoring duty founded on illegality to be capable of compliance with the [ECHR] ‘prescribed by law’ requirement, it should be limited to offences the commission of which can be adjudged on the face of the user content without recourse to further information. <br /><br />Further, proportionality considerations may lead to the perhaps stricter conclusion that the illegality must be manifest on the face of the content without requiring the platform to make any independent assessment of the content in order to find it unlawful. … <br /><br />The [government’s ECHR] Memorandum does not address the arbitrariness identified above in relation to proactive illegality duties, stemming from an obligation to adjudge illegality in the legislated or inevitable practical absence of material facts. Such a vacuum cannot be filled by delegated powers, by an Ofcom code of practice, or by stipulating that the platform’s systems and processes must be proportionate.” <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2022</b>:<o:p></o:p> “For priority illegal content the Bill contemplates proactive monitoring, detection and removal technology operating in real time or near-real time. There is no obvious possibility for such technology to inform itself of extrinsic information about a post, such as might give rise to a defence of reasonable excuse, or which might shed light on the intention of the poster, or provide relevant external context.” <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">Written evidence to Public Bill Committee</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>July 2022</b>: <o:p></o:p>"... especially for real-time proactive filtering providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making. Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff." <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">Platforms adjudging illegality – the Online Safety Bill’s inference engine</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>July 2022</b>: “In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned. … <br /><br />It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.” <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">Platforms adjudging illegality – the Online Safety Bill’s inference engine</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2022</b>: “The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality. <br /><br />Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless they have reasonable grounds to infer that one may be successfully relied upon. <br /><br />The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality.” <a href="https://www.cyberleagle.com/2022/11/test-your-knowledge-of-online-safety.html" target="_blank">How well do you know the Online Safety Bill?</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>January 2023</b>: <o:p></o:p>"If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications." <a href="https://www.cyberleagle.com/2023/01/positive-light-or-fog-in-channel.html" target="_blank">Positive light or fog in the Channel?</a></span> </p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>January 2023</b>: <o:p></o:p>“False positives are inevitable with any moderation system - all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives.” <a href="https://www.cyberleagle.com/2023/01/positive-light-or-fog-in-channel.html" target="_blank">Positive light or fog in the Channel?</a></span> </p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>January 2023</b>: “These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences… . They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression. <br /><br />It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.” <a href="https://www.cyberleagle.com/2023/01/positive-light-or-fog-in-channel.html" target="_blank">Positive light or fog in the Channel?</a></span> </p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Freedom of
expression and Prior Restraint<o:p></o:p></span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b>: The debate on the effect of the Bill on freedom of expression is perhaps the most polarised of all: the government contending that the Bill sets out to secure freedom of expression in various ways, its critics maintaining that the Bill's duties on service providers will inevitably damage freedom of expression through suppression of legitimate user content. Placing stronger freedom of expression duties on platforms when carrying out their safety duties may be thought to highlight the Bill's deep internal contradictions. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “We derive from the right of freedom of speech a set of principles that collide with the kind of actions that duties of care might require, such as monitoring and pre-emptive removal of content. The precautionary principle may have a place in preventing harm such as pollution, but when applied to speech it translates directly into prior restraint. The presumption against prior restraint refers not just to pre-publication censorship, but the principle that speech should stay available to the public until the merits of a complaint have been adjudicated by a legally competent independent tribunal. The fact that we are dealing with the internet does not negate the value of procedural protections for speech.” <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: <o:p></o:p>"US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and by hook or by crook the internet genie must be stuffed back in its bottle." <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: <o:p></o:p>“If it be said that mere facilitation of users’ individual public speech is sufficient to justify control via a preventive duty of care placed on intermediaries, that proposition should be squarely confronted. It would be tantamount to asserting that individual speech is to be regarded by default as a harm to be mitigated, rather than as the fundamental right of human beings in a free society. As such the proposition would represent an existential challenge to the right of individual freedom of speech.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: <o:p></o:p>“The duty of care would…, since the emphasis is on prevention rather than action after the event, create an inherent conflict with the presumption against prior restraint, a long standing principle designed to provide procedural protection for freedom of expression.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Feb 2020</b>: <o:p></o:p>"People like to say that freedom of speech is not freedom of reach, but that is just a slogan. If the state interferes with the means by which speech is disseminated or amplified, it engages the right of freedom of expression. Confiscating a speaker’s megaphone at a political rally is an obvious example. ... Seizing a printing press is not exempted from interference because the publisher has the alternative of handwriting. Freedom of speech is not just freedom to whisper." <a href="https://www.cyberleagle.com/2020/02/online-harms-ifaq.html" target="_blank">Online Harms IFAQ</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Feb 2020</b>: “… increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. So Carnegie questions whether removing a retweet facility is really a violation of users' rights to formulate their own opinion and express their views, or rather - to the contrary - a mechanism to support those rights by slowing them down so that they can better appreciate content, especially as regards onward sharing. <br /><br />The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.” <a href="https://www.cyberleagle.com/2020/02/online-harms-ifaq.html" target="_blank">Online Harms IFAQ</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2020</b>: <o:p></o:p>"The French Constitutional Council decision is a salutary reminder that fundamental rights issues are not the sole preserve of free speech purists, nor mere legal pedantry to be brushed aside in the eagerness to do something about the internet and social media." <a href="https://www.cyberleagle.com/2020/06/online-harms-and-legality-principle.html" target="_blank">Online Harms and the Legality Principle</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2020</b>: “10 things that Article 19 of the Universal Declaration of Human Rights doesn’t say” (<a href="https://twitter.com/cyberleagle/status/1273365495824224258?s=20" target="_blank">Twitter thread</a> – now 18 things.) Sample:<br /></span></p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p class="MsoNormal" style="text-align: left;"><span style="font-family: georgia;">“6. Everyone has the right to seek, receive and impart information and ideas through any media, always excepting the internet and social media.”</span></p></blockquote>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2021</b>: "… the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal. That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers. <br /><br />No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. That smacks of faith in the existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it. <br /><br />Internal conflicts between duties... sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as ... a censor’s charter." <a href="https://www.cyberleagle.com/2021/05/harm-version-30-draft-online-safety-bill.html" target="_blank">Harm Version 3.0: the draft Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2021</b>: <o:p></o:p>"Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those that facilitate it must take precautionary steps to mitigate the danger. That is the antithesis of the traditional principle that, within boundaries set by clear and precise laws, we are free to speak as we wish. The mainstream press may comfort themselves that this novel approach to speech is (for the moment) being applied only to the evil internet and to the unedited individual speech of social media users; but it is an unwelcome concept to see take root if you have spent centuries arguing that freedom of expression is not a fundamental risk, but a fundamental right." <a href="https://www.cyberleagle.com/2021/06/carved-out-or-carved-up-online-safety.html" target="_blank">Carved out or carved up? The draft Online Safety Bill and the press</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2021</b>: “[D]iscussions of freedom of expression tend to resemble convoys of ships passing in the night. If, by the right of freedom of expression, Alice means that she should be able to speak without fear of being visited with state coercion; Bob means a space in which the state guarantees, by threat of coercion to the owner of the space, that he can speak; Carol contends that in such a space she cannot enjoy a fully realised right of freedom of expression unless the state forcibly excludes Dan’s repugnant views; and Ted says that irrespective of the state, Alice and Bob and Carol and Dan all directly engage each other’s fundamental right of freedom of expression when they speak to each other; then not only will there be little commonality of approach amongst the four, but the fact that they are talking about fundamentally different kinds of rights is liable to be buried beneath the single term, freedom of expression. <br /><br />If Grace adds that since we should not tolerate those who are intolerant of others’ views the state should – under the banner of upholding freedom of expression – act against intolerant speech, the circle of confusion is complete.” <a href="https://www.cyberleagle.com/2021/06/speech-vs-speech.html" target="_blank">Speech vs. Speech</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2021</b>: “A systemic [safety] duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky. <br /><br />The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.” <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">The draft Online Safety Bill: systemic or content-focused?</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: <o:p></o:p>"It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality. It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content." <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2023</b>: <o:p></o:p>"In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia ... the decision makes uncomfortable reading for some core aspects of the Online Safety Bill." <a href="https://www.cyberleagle.com/2023/03/five-lessons-from-loi-avia.html" target="_blank">Five lessons from the Loi Avia</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Rule of law</span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> Once the decision was made to enact a framework designed to give flexibility to a regulator (Ofcom), rule of law concerns around certainty and foreseeability of content rules and decisions were bound to come to the fore. These issues are part and parcel of the government's decided policy approach.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>March 2019</b>: “Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.” <a href="https://www.cyberleagle.com/2019/03/a-ten-point-rule-of-law-test-for-social.html" target="_blank">A Ten Point Rule of Law Test for a Social Media Duty of Care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: <o:p></o:p>“The White Paper, although framed as regulation of platforms, concerns individual speech. The platforms would act as the co-opted proxies of the state in regulating the speech of users. Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of an online duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what speech is liable to be the subject of preventive or mitigating action by a platform operator operating under the duty of care.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2020</b>: <o:p></o:p>"If you can't articulate a clear and certain rule about speech, you don't get to make a rule at all." <a href="https://t.co/hQ6HbfuovK?amp=1" target="_blank">Disinformation and Online Harms</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2020</b>: “The proposed Online Harms legislation falls squarely within [the legality] principle, since internet users are liable to have their posts, tweets, online reviews and every other kind of public or semi-public communication interfered with by the platform to which they are posting, as a result of the duty of care to which the platform would be subject. Users, under the principle of legality, must be able to foresee, with reasonable certainty, whether the intermediary would be legally obliged to interfere with what they are about to say online.” <a href="https://www.cyberleagle.com/2020/06/online-harms-and-legality-principle.html" target="_blank">Online Harms and the Legality Principle</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>September 2020</b>: <o:p></o:p>“If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be. That is important from the perspective not only of the intermediary, but of the user. From a rule of law standpoint, it is imperative that the user should be able to predict, in advance, with reasonable certainty, whether what they wish to say is likely to be affected by the actions of an intermediary seeking to discharge its duty of care.” <a href="https://drive.google.com/file/d/1e4vSClZWin0wyG6PK68lzH7DtvjVfuZv/view" target="_blank">Submission to Ofcom Call for Evidence</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>September 2020</b>: <o:p></o:p>“…the purpose of these examples is less about what the answer is in any given case (although that is of course important in terms of whether the line is being drawn in the right place), but more about whether we are able to predict the answer in advance. If a legal framework does not enable us to predict clearly, in advance, what the answer is in each case, then there is no line and the framework falls at the first rule of law hurdle of “prescribed by law”. It is not sufficient to make ad hoc pronouncements about what the answer is in each case, or to invoke high level principles. We have to know why the answer is what it is, expressed in terms that enable us to predict with confidence the answer in other concrete cases.” <a href="https://drive.google.com/file/d/1e4vSClZWin0wyG6PK68lzH7DtvjVfuZv/view" target="_blank">Submission to Ofcom Call for Evidence</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>August 2022</b>: <o:p></o:p>“The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action.” <a href="https://www.cyberleagle.com/2022/08/reimagining-online-safety-bill.html" target="_blank">Reimagining the Online Safety Bill</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Regulation by
regulator<o:p></o:p></span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> A regulatory model akin to broadcast-style regulation by regulator has been part of the government's settled approach from the start. Changing that would require a rethink of the Bill. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>June 2018</b>: “The choice is not between regulating or not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries. <br /><br />Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.” <a href="https://www.cyberleagle.com/2018/06/regulating-internet-intermediaries-to.html" target="_blank">Regulating the internet – intermediaries to perpetrators</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: "[W]hen regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve." <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: "It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator. <br /><br />If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet." <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2019</b>: <o:p></o:p>"...the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'etre is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.” <a href="https://www.cyberleagle.com/2019/03/a-ten-point-rule-of-law-test-for-social.html" target="_blank">A Ten Point Rule of Law Test for a Social Media Duty of Care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2019</b>: “Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (<i>Reno v ACLU</i>) in 1997. … In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.” <a href="https://www.cyberleagle.com/2019/05/the-rule-of-law-and-online-harms-white.html" target="_blank">The Rule of Law and the Online Harms White Paper</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: <o:p></o:p>“A Facebook, Twitter or Mumsnet user is not an invited audience member on a daytime TV show, but someone exercising their freedom to speak to the world within clearly defined boundaries set by the law. A policy initiative to address behaviour online should take that principle as its starting point and respect and work within it. The White Paper does not do so. It cannot be assumed that an acceptable mode of regulation for broadcast is appropriate for individual speech. The norm in the offline world is that individual speech should be governed by general laws, not by a discretionary regulator.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>February 2020</b>: “Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace. <br /><br />The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same. And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.” <a href="https://www.cyberleagle.com/2020/02/online-harms-ifaq.html" target="_blank">Online Harms IFAQ</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>June 2020</b>: <o:p></o:p>“Broadcast-style regulation is the exception, not the norm. In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is true for the internet as in any other medium.” <a href="https://www.cyberleagle.com/2020/06/online-harms-revisited.html" target="_blank">Online Harms Revisited</a></span></p>
<p class="MsoNormal"><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">Analogy wars<o:p></o:p></span></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: <o:p></o:p>"Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’etre. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech." <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">A Lord Chamberlain for the internet? Thanks, but no thanks</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>October 2018</b>: “[N]o analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the duty of care would consist in preventing behaviour of one site visitor to another. <br /><br />The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.” <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">Take care with that social media duty of care</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2021</b>: <o:p></o:p>“Welcome to the Online Regulation Analogy Collection: speech as everything that it isn't (and certainly not as the freedom that underpins all other freedoms)” (<a href="https://twitter.com/cyberleagle/status/1389194755939217411?s=20" target="_blank">Twitter thread</a>)</span></p></details>
<details><summary><span style="font-family: georgia;"><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;">What’s illegal
offline is illegal online</span></b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><o:p></o:p></span></span></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> Amongst all the narratives that have infused the Online Harms debate, the mantra of online-offline equivalence has been one of the longest-running. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>February 2022</b>: "Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”. <br /><br />That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence). <br /><br />On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence. " <a href="https://www.cyberleagle.com/2022/02/harm-version-40-online-safety-bill-in.html" target="_blank">Harm Version 4.0 - The Online Harms Bill in metamorphosis</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>December 2022</b>: “DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’. <br /><br />The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is <i>legal</i> offline is (or should be) <i>legal</i> online? The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." <br /><br />Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that." <a href="https://www.cyberleagle.com/2022/12/some-of-what-is-legal-offline-is.html" target="_blank">(Some of) what is legal offline is illegal online</a></span></p></details>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">End-to-End
Encryption<o:p></o:p></span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> The issue of end-to-end encryption, and the allied Ofcom power to require messaging platforms to deploy CSEA scantech, has been a slow burner. It will feature in Lords amendments. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>June 2019</b>: “What would prevent the regulator from requiring an in-scope private messaging service to remove end-to-end encryption? This is a highly sensitive topic which was the subject of considerable Parliamentary debate during the passage of the Investigatory Powers Bill. It is unsuited to be delegated to the discretion of a regulator.” <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">Speech is not a tripping hazard</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>May 2020</b>: <o:p></o:p>"This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end to end encryption." <a href="https://www.cyberleagle.com/2020/05/a-tale-of-two-committees.html" target="_blank">A Tale of Two Committees</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>November 2022</b>: “Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.” <a href="https://www.cyberleagle.com/2022/11/test-your-knowledge-of-online-safety.html" target="_blank">How well do you know the Online Safety Bill?</a></span></p>
<details><summary><b><span style="font-variant-alternates: normal; font-variant-caps: small-caps; font-variant-east-asian: normal; font-variant-numeric: normal;"><span style="font-family: georgia;">New offences<o:p></o:p></span></span></b></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> The Bill introduces several new offences that could be committed by users. The proposal to enact a new harmful communications offence was dropped after well-founded criticism, leaving the notorious S.127(1) Communications Act offence in place. The government is expected to introduce more offences. </span></p><p class="MsoNormal"><span style="font-family: georgia;">A backbench Lords amendment seeks to add the new false and threatening communications offences to the list of priority illegal content that platforms would have to proactively seek out and remove.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: “The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them). <br /><br />It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.” <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>February 2023</b>: “Why has the government used different
language from the Law Commission's recommendation for the threatening
communications offence? The concern is that the government’s rewording broadens
the offence beyond the genuine threats that the Law Commission intended should
be captured. The spectre of the Twitter Joke Trial hovers in the wings.” (<a href="https://twitter.com/cyberleagle/status/1628669102733623296?s=20">Twitter
thread</a>)<o:p></o:p></span></p></details>
<details><summary><span style="font-family: georgia;"><span style="font-variant-caps: small-caps;"><b>Extraterritoriality</b></span></span></summary>
<p class="MsoNormal"><span style="font-family: georgia;"><b>State of Play</b> The territorial reach of the Bill has attracted relatively little attention. As a matter of principle territorial overreach is to be deprecated, not least because it encourages similar lack of jurisdictional self-restraint on the part of other countries. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>December 2020</b>: <o:p></o:p>“For the first time, the Final Response has set out the proposed territorial reach of the proposed legislation. Somewhat surprisingly, it appears to propose that services should be subject to UK law on a ‘mere availability of content’ basis. Given the default cross-border nature of the internet, this is tantamount to legislating extraterritorially for the whole world. It would follow that any provider anywhere in the rest of the world would have to geo-fence its service to exclude the UK in order to avoid engaging UK law. Legislating on a mere availability basis has been the subject of criticism over many years since the advent of the internet.” <a href="https://www.cyberleagle.com/2020/12/the-online-harms-edifice-takes-shape.html" target="_blank">The Online Harms edifice takes shape</a></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>March 2022</b>: “The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world. <br /><br />The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service. Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated.” <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Mapping the Online Safety Bill</a></span></p></details>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwzfVKoP11UbtN936ShTnd64vY_nm7ZiZ7k8vpojnULBAqeRNDWcqOR-XJXqywPCQbNDRpwvPHo2fN1txRtFJSTv-Y4caex9RNi6gHM8O2RgVlyyg_Cbb8q-tJbHQUvegqkUxc9J9f4Ldv5ycm6bIn8r7lG3XkXovni94Q5nv8VT-DamoEqdZsMv0WSw/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwzfVKoP11UbtN936ShTnd64vY_nm7ZiZ7k8vpojnULBAqeRNDWcqOR-XJXqywPCQbNDRpwvPHo2fN1txRtFJSTv-Y4caex9RNi6gHM8O2RgVlyyg_Cbb8q-tJbHQUvegqkUxc9J9f4Ldv5ycm6bIn8r7lG3XkXovni94Q5nv8VT-DamoEqdZsMv0WSw/s1600/snip2.png" width="135" /></a></div><br /><p class="MsoNormal"><br /></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-42244274773866471532023-03-11T12:38:00.001+00:002023-03-11T15:14:23.204+00:00Five lessons from the Loi Avia<p class="MsoNormal"><span style="font-family: georgia;">In a few months’ time three years will have passed since the
French Constitutional Council <a href="https://www.conseil-constitutionnel.fr/decision/2020/2020801DC.htm" target="_blank">struck down</a> the core provisions of the Loi Avia -
France’s equivalent of the German NetzDG law – for incompatibility with
fundamental rights. Although the controversy over the Loi Avia has passed into internet history, the Constitutional Council's decision provides some
instructive comparisons when we examine the UK’s <a href="https://bills.parliament.uk/bills/3137">Online Safety Bill</a>.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As the Bill awaits
its House of Lords Committee debates, this is an opportune moment to cast our
minds back to the Loi Avia decision and see what lessons it may hold. Caution
is necessary in extrapolating from judgments on fundamental rights, since they are highly fact-specific; and when they do lay down principles they tend to leave cavernous room for future interpretation. Nevertheless,
the Loi Avia decision makes
uncomfortable reading for some core aspects of the Online Safety Bill.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Background<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The key features of
the Loi Avia were</span>: </p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;">For illegal CSEA and terrorism content, one-hour removal of content notified to an in-scope publisher or host by the administrative authority, on pain of one year’s imprisonment and a 250,000 euro fine.</span></li></ul><p></p>
<blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p class="MsoNormal" style="text-align: left;"><span style="font-family: georgia;">The Constitutional
Council’s objection was founded on the determination of illegality being at the
sole discretion of the administrative authority. This provision has no direct
parallel in the Online Safety Bill. However, similar considerations could come
into play should an Ofcom Code of Practice recommend giving state agencies some
kind of trusted flagger status.</span></p></blockquote><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;">For content
contravening specified hate-related, genocide-related, sexual harassment and
child pornography laws, 24-hour removal of manifestly illegal content following
notification by any person to an in-scope platform operator, under penalty of a
fine of 250,000 euros.</span></li></ul><p></p>
<blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p class="MsoNormal" style="text-align: left;"><span style="font-family: georgia;">The Online Safety
Bill analogue is a reactive ‘swift take down’ duty on becoming aware of
in-scope illegal content. Unlike the Loi Avia, the Bill also imposes proactive
prevention duties. </span></p></blockquote>
<p class="MsoNormal"><span style="font-family: georgia;">The Online Safety
Bill imposes duties for both illegal content and legal content harmful to
children. Since the Loi Avia concerned only illegal content, the Constitutional
Council did not have to consider obligations relating to ‘legal but harmful’
content of any kind, whether for adults or children. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Lesson 1: The rule
of law comes first<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The tests that the
Constitutional Council applied to the Loi Avia – legality, necessity and
proportionality – are components of the European Convention on Human Rights,
with which the Online Safety Bill must comply. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Along the obstacle
course of human rights compatibility, the first hurdle is legality: known in
the ECHR as the “prescribed by law” test. In short, a law must have the quality
of law to qualify as law. If the law does not enable someone to foresee with
reasonable certainty whether their proposed conduct is liable to be affected as
a consequence of the law, it falls at that first hurdle. If legislation will result in
arbitrary or capricious decisions - for example </span><span style="font-family: georgia;">through vagueness or grant of excessive discretion -</span><span style="font-family: georgia;"> it lacks the essential quality of law.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The problem with vagueness
was spelt out by the House of Lords in <i>R v Rimmington</i>, citing the US
case of <i>Grayned</i>:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">"Vagueness
offends several important values … A vague law impermissibly delegates basic
policy matters to policemen, judges and juries for resolution on an ad hoc and
subjective basis, with the attendant dangers of arbitrary and discriminatory
application."<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Whilst most often
applied to criminal liability, the legality objection has also been described
as a constitutional principle that underpins the rule of law generally. Lord
Diplock referred to it in a 1975 civil case (<i>Black-Clawson</i>):<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">"The
acceptance of the rule of law as a constitutional principle requires that a
citizen, before committing himself to any course of action, should be able to
know in advance what are the legal consequences that will flow from it."<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The French Constitutional
Council held that the Loi Avia failed the legality test in one respect. The Loi
provided that the intentional element of the offence of failure to remove
content notified by any person could arise from absence of a “proportionate and
necessary examination of the notified content”. The Constitutional Council
found that if this was intended to provide a defence for platform operators, it
was not drafted in terms that allowed its scope to be determined. In other
words, a defence (if that is what it was) of having carried out a proportionate
and necessary examination was too vague to pass the legality test. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Online Safety
Bill differs from the Loi Avia. It does not impose criminal liability on a
platform for failure to take down a particular item of user content. Enforcement by the appointed regulator, Ofcom,
is aimed at systematic failures to fulfil duties rather than at individual content
decisions. Nevertheless, the Bill is liberally sprinkled with references to
proportionality </span><span style="font-family: georgia;">– similar language to that which the French Constitutional Council held was too vague</span><span style="font-family: georgia;">. It typically couches platform and search engine duties as an obligation
to use proportionate systems and processes designed to achieve a stipulated result.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is open to question
whether compliance with the legality principle can be achieved simply by
inserting ‘proportionate’ into a broadly stated legal duty, instead of grasping
the nettle of articulating a more concrete obligation that would enable the
proportionality of the interference with fundamental rights to be assessed by a
court. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government’s
<a href="https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/online-safety-bill-european-convention-on-human-rights-memorandum" target="_blank">ECHR Memorandum</a> seeks to head off any objection along these lines by stressing
the higher degree of certainty that it expects would be achieved when Ofcom’s
Codes of Practice have been laid before Parliament and come into effect. Even if that does the trick, it is <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html">another matter</a> whether
it is desirable to grant that amount of discretion over individual speech to a
regulator such as Ofcom.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">For the Online
Safety Bill the main relevance of the legality hurdle is to the freedom of
expression rights of individual users. Can a user foresee with reasonable
certainty whether their proposed communication is liable to be affected as a
result of a platform or search engine seeking to fulfil a safety duty imposed
by the legislation? The Bill requires those online intermediaries to play detective,
judge and bailiff. Interpolation of an online intermediary into the process of
adjudging and sanctioning user content is capable of introducing <a href="https://www.cyberleagle.com/2023/01/positive-light-or-fog-in-channel.html" target="_blank">arbitrariness that is not present when the same offence is prosecuted through the courts</a>, with their
attendant due process protections. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the case of the
Online Safety Bill, arbitrariness is a real prospect. That is largely because of
the kinds of offences on which platforms and search engines are required to
adjudicate, the limited information available to them, and the <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">standard to which they have to be satisfied that the user content is illegal</a>. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Lesson 2 – Beyond ‘manifestly
illegal’<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">An intriguing
feature of the Constitutional Council decision is that although the Loi Avia prescribed, on the face of it, a high threshold for removal of illegal content –
manifest illegality </span><span style="font-family: georgia;">–</span><span style="font-family: georgia;"> that was not enough to save the legislation from
unconstitutionality. ‘Manifestly
illegal’ is a more stringent test than the ‘<a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">reasonable grounds to infer</a>’ threshold
prescribed by the Online Safety Bill.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Loi Avia
required removal of manifestly illegal user content within 24 hours of
receiving from anyone a notification which gave the notifier’s identity, the
location of the content, and which specified the legal grounds on which the
content was said to be manifestly illegal. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Constitutional
Council observed that the legislation required the operator to examine all
content reported to it, however numerous the reports, so as not to risk being
penalised. Moreover, once reported the platform had to consider not only the
specific grounds on which the content was reported, but all offences within the
scope of the legislation – even though some might present legal technicalities
or call for an assessment of context. These issues were especially significant
in the light of the 24-hour removal deadline and the criminal penalty for each
failure to withdraw.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the Constitutional
Council’s view the consequence of these provisions, taking into account also the
absence of any clearly specified defence to liability, was that operators could
only be encouraged to withdraw content reported to them, whether or not it was
manifestly illegal. That was not necessary, appropriate or proportionate and so
was unconstitutional. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Online Safety
Bill does not prescribe specific time limits, but requires swift removal of
user content upon the platform becoming aware of in-scope illegality. As with
the Loi Avia, that applies to all in-scope offences.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The touchstone for
assessment of illegality under the Bill is reasonable grounds to infer
illegality, on the basis of all information reasonably available to the
platform. Unless that threshold is surmounted, the platform does not have to
remove the content. If it is surmounted, the platform must do so swiftly. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">At least in the case
of automated proactive monitoring and filtering, the available information will
be minimal – the users’ posts themselves and whatever the system knows about the relevant users. As a consequence, the
decisions required to be made for many kinds of offence – especially those
dependent on context - will inevitably be arbitrary. Moreover, a platform has to
ignore the possibility of a defence unless it has something from which it can
infer on reasonable grounds that a defence may succeed.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Whilst the Online
Safety Bill lacks the Loi Avia’s chilling sword of Damocles of short prescriptive
deadlines and automatic criminal liability for failure to remove, the reason why
those factors (among others) were legally significant was their effect on the freedom
of expression of users: the likely over-removal of lawful user content. The Online
Safety Bill’s lower threshold for adjudging illegality, combined with the
requirement to make those judgments in a relative information vacuum - often at
scale and speed - does more than just encourage takedown of legal user content:
it requires it. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Lesson 3 – The lens
of prior restraint<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The briefly glimpsed
elephant in the room of the Loi Avia decision is <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">prior restraint</a>. The Constitutional
Council alluded to it when it remarked that the removal obligations were not
subject to the prior intervention of a judge or subject to any other condition.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Legislation
requiring a platform summarily to adjudge the legality of individual items of
user content at speed and at scale bears the hallmarks of prior restraint:
removal prior to full adjudication on the merits after argument and evidence. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Prior restraint is
not impermissible. It does require the most stringent scrutiny and
circumscription, in which the risk of removal of legal content will loom large.
The ECtHR in <i><a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-115705%22]}" target="_blank">Yildirim</a></i> considered an interim court order blocking Google
Sites. It characterised that as a prior
restraint, and observed: “the dangers inherent in prior restraints are such
that they call for the most careful scrutiny on the part of the Court”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The ECtHR in <i><a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-119244%22]}" target="_blank">Animal Defenders v UK</a></i> distinguished a prior restraint imposed on an individual
act of expression from general measures: in that case a ban on broadcasting
political advertising. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If an individual item is removed ultimately pursuant to a general measure, that does not prevent the action being characterised as a prior restraint. If it did, the doctrine could not
be applied to courts issuing interim injunctions. The fact that the Online
Safety Bill does not penalise a platform for getting an individual decision
wrong does not disguise the fact that the required task is to make judgments
about individual items of user content constituting individual acts of
expression. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The appropriateness of
categorising at least proactive detection and filtering obligations as a form
of prior restraint is reinforced by the CJEU decision in <i><a href="https://curia.europa.eu/juris/document/document.jsf?text=&docid=258261&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1920736" target="_blank">Poland v The European Parliament and Council</a></i>, which applied <i>Yildirim</i> to those
kinds of provisions in the context of copyright. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Lesson 4 – Context,
context, context<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Constitutional Council
pointed out the need to assess context for some offences. That is all the more
significant for the Online Safety Bill, for several reasons.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">First, unlike the
Loi Avia the Online Safety Bill imposes proactive, not just reactive, duties.
That will multiply the volume of user content to be assessed, in many cases
requiring the deployment of automated content monitoring. Such systems, by
their very nature, can be aware only of content flowing through the system and
not of any external context. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Second, the Bill
requires illegality assessments to be made ignoring external contextual
information unless it is reasonably available to the platform.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Third, defences such
as reasonableness will often be inherently contextual. The Bill, however,
enables the intermediary to take account of the possibility of a defence only
if it has information on the basis of which it can infer that a defence may
successfully be relied upon. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Lesson 5 – Proactive
duties<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Loi Avia
decision was about reactive duties based on notification. Proactive illegality duties
present inherently greater human rights challenges. A less prescriptive, less
draconian reactive regime, combined with a ‘manifest illegality’ standard and
greater due process safeguards, might possibly have survived. But if the
starting point is aversion to a regime that encourages takedown of legal
user content, it is difficult to see how a regime that carries a certainty of
over-takedown, as do the Online Safety Bill’s proactive illegality duties, could
pass muster. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">What is to be done? <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p><p><span style="font-family: georgia;"><a href="https://www.cyberleagle.com/2022/08/reimagining-online-safety-bill.html" target="_blank">Raising the Online Safety Bill’s standard of assessment</a> from reasonable grounds to infer to
manifest illegality would improve the prospect of human rights compliance. But
that still leaves the problem of the assessment having to be made in ignorance
of external context; and the problem of the possibility of a defence being discounted unless it is
apparent from the information flowing through the system. Those more
intractable issues put in question the kinds of offences that platforms and search engines could
be called upon to adjudge. </span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5T1TPEfmjG0dJBuBreTgWtr6n5F0dGzQoUOHNAWHNqkUO1SOS8V21bjNLcwOfxXaPz9idW7QzgE-z8mAXBfYHSrMWmU1M37Cw-aIs6I10SMdKqnrfOWm7Y9RaxXRc8NDRcVxXXNvNJoMR34pHYuG-x6Hmqp-qhVc0uKMu8Nd1nXdxrQqp1uUzwrA2gQ/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5T1TPEfmjG0dJBuBreTgWtr6n5F0dGzQoUOHNAWHNqkUO1SOS8V21bjNLcwOfxXaPz9idW7QzgE-z8mAXBfYHSrMWmU1M37Cw-aIs6I10SMdKqnrfOWm7Y9RaxXRc8NDRcVxXXNvNJoMR34pHYuG-x6Hmqp-qhVc0uKMu8Nd1nXdxrQqp1uUzwrA2gQ/s1600/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p><b>Positive light or fog in the Channel?</b> (24 January 2023)<br /><br /><span style="font-family: georgia;">If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications. Proactive measures could involve scouring the platform for content already uploaded, filtering and blocking at the point of attempted upload, or both. <br /><br />The political target of the Bill amendment, which the government says it will introduce in the House of Lords, is videos of migrants crossing the Channel in boats.
The Secretary of State explained it thus:<br /><blockquote>“We will also add Section 24 of the Immigration Act 1971 to the priority offences list in Schedule 7. Although the offences in Section 24 cannot be carried out online, paragraph 33 of the Schedule states that priority illegal content includes the inchoate offences relating to the offences that are listed. Therefore aiding, abetting, counselling, conspiring etc those offences by posting videos of people crossing the channel which show that activity in a positive light could be an offence that is committed online and therefore falls within what is priority illegal content. The result of this amendment would therefore be that platforms would have to proactively remove that content.” </blockquote>We have to assume that this wheeze was dreamed up in some haste, meeting the immediate political imperative to respond to a strongly supported back bench amendment that tried to tack videos of boat crossings on to the Bill’s children’s duties. Now that the dust has settled, at least temporarily, let us take a look at what would be involved in applying the government's proposal. <br /><br />In view of some of the media commentary, it is worth emphasising that the proposed amendment to the Bill would not create a new offence. It is based on existing accessory liability legislation, which platforms (and indeed search engines) would have to apply proactively. <br /><br /><b>In a positive light</b></span><div><span style="font-family: georgia;"><br /></span><div><span style="font-family: georgia;">Where does ‘in a positive light’ come from? Presumably the Secretary of State must have had in mind that if a video shows the activity of crossing the Channel to gain illegal entry to the UK in a negative light – thus tending to deter the activity - that cannot amount to counselling (in modern language, encouraging) an offence of entering (or attempting to enter) the UK illegally. So far so good. 
But that does not mean we should jump to the conclusion that ‘in a positive light’ is sufficient to amount to encouragement. <br /><br />The offence of aiding, abetting, counselling etc a Section 24 offence applies not only to videos but to any kind of communication, whether on social media, simple discussion forums, websites or elsewhere. <br /><br />You do not have to go far to find studies suggesting that illegal immigration can have positive benefits to an economy. Does supporting that position in an online discussion about UK immigration put the activity of illegal entry to the UK in a positive light? Quite possibly. Does it (in the legal sense) encourage an offence of illegal entry to the UK? Surely not. That is a far cry from intentionally encouraging a prospective illegal migrant to commit an illegal entry offence. <br /><br />The idea that someone might be prosecuted for voicing that kind of opinion in a general online discussion is (one would hope) absurd. It brings to mind the comment of Lord Scott in <i>Rusbridger v Attorney-General</i>, a case about the moribund Section 3 of the Treason Felony Act 1848:<br /><blockquote>“[Y]ou do not have to be a very good lawyer to know that to advocate the abolition of the monarchy and its replacement by a republic by peaceful and constitutional means will lead neither to prosecution nor to conviction. All you need to be is a lawyer with commonsense.” </blockquote>In any event legislation must, so far as it is possible to do so, be read and given effect in a way which is compatible with the European Convention on Human Rights right of freedom of expression (S.3 Human Rights Act 1998; albeit the Bill of Rights Bill would repeal that provision). 
<br /><br />The Secretary of State’s proposal has <a href="https://www.theguardian.com/commentisfree/2023/jan/19/ministers-ban-videos-channel-crossings-small-boats" target="_blank">reportedly</a> sparked fears among humanitarian organisations of consequences if they share footage that may call into question the policing of Channel crossings. The Home Office, for its part, has <a href="https://www.theguardian.com/commentisfree/2023/jan/19/ministers-ban-videos-channel-crossings-small-boats" target="_blank">said</a> that they would not be penalised. That is an understandable view if all the legal elements of an encouragement offence are properly taken into account.<br /><br />Nevertheless, it is not so far-fetched a notion that an online platform, tasked by the Online Safety Bill proactively to detect and remove user content that encourages an illegal entry offence, might consider itself duty-bound to remove content that in actual fact would not result in prosecution or a conviction in court. There are specific reasons for this under the Bill, which contrast with prosecution through the courts.<br /><br /><b>Prosecution versus the Bill's illegality duties</b></span></div><div><span style="font-family: georgia;"><br />First the platform’s removal duty under the Bill kicks in not if the user’s content is illegal beyond reasonable doubt, or manifestly illegal, but if the platform has ‘reasonable grounds to infer’ illegality – on the face of it a significantly lower standard. Whether this standard is compatible with Article 10 of the European Convention on Human Rights is questionable, but nevertheless it is what the Bill says. The Bill would inevitably require platforms to remove some content that is in fact legal.<br /><br />Second, the Bill requires platforms to act on all the information reasonably available to the platform: a far more limited factual basis than a court. 
At least for an automated system, that would be likely to be the content of the post and any related information on the platform (such as information indicating the nature and identity of the poster). It excludes any extrinsic contextual information not reasonably available to the platform. <br /><br />Further, the platform can take into account the possibility of a defence only if it has reasonable grounds to infer that one may successfully be relied upon. For many defences (such as reasonable excuse) any grounds for a defence will not necessarily be apparent from the information available to the platform, in which case the possibility of a defence must be ignored. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">The platform’s assessment of illegality may thus depend on the happenstance of whether there is anything in the post itself, or its surrounding data, that points to the possibility of a successful defence. For some widely drawn offences intent and available defences are the most significant elements in determining legality, and are integral to the balance drawn by the legislature. This, we shall see, is of particular relevance to the encouragement and assistance offences under the Serious Crime Act 2007. <br /><br />Third, the task of a platform is not to second-guess whether the authorities would prosecute, but to decide whether it has reasonable grounds to infer that the content falls within the letter of the law. Whilst the Bill makes numerous references to proportionality, that does not affect the basis on which the platform must determine illegality. That is a binary, yes or no assessment. There is no obvious room for a platform to conclude that something is only a little bit illegal, or to decide that, once detected, some content crossing the ‘reasonable grounds to infer’ threshold could be left up.
Certainly the political expectation is that any detected illegal content will be removed. <br /><br />If that is right, the assessment that platforms are required to make under the Bill lacks anything akin to the ameliorating effect of prosecutorial discretion on the rough edges of the criminal law. Conversely, to build such discretion, even principles-based, into the decision-making required of platforms would hardly be a solution either, especially not at the scale and speed implied by automated proactive detection and removal obligations. We do not want platforms to be arbiters of truth, but to ask them (or their automated systems) to be judges of the public interest or of the seriousness of offending would be a recipe for guesswork and arbitrariness, even under the guidance of Ofcom. <br /><br />If this seems like a double bind, it is. It reflects a fundamental flaw in the Bill’s duty of care approach: the criminal law was designed to be operated within the context of the procedural protections provided by the legal system, and to be adjudged by courts on established facts after due deliberation; not to be the subject of summary justice dispensed on the basis of incomplete information by platforms and their automated systems tasked with undertaking proactive detection. <br /><br />Fourth, we shall see that in some cases the task required of the platform appears to involve projection into the future on hypothetical facts. Courts are loath to assess future criminal illegality on a hypothetical basis. Their task at trial is to determine whether the events that are proved in fact to have occurred amounted to an offence. <br /><br />Fifth, inaccuracy.
False positives are inevitable with any moderation system - all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives. <br /><br />Sixth, the involvement of Ofcom. The platform would have the assistance of a Code of Practice issued by Ofcom. That would no doubt include a section describing the law on encouragement and assistance in the context of the S.24 1971 Act illegal entry offences, and would attempt to draw some lines to guide the platform’s decisions about whether it had reasonable grounds to infer illegality. <br /><br />An Ofcom Code of Practice would carry substantial legal and practical weight. That is because the Bill provides that taking the measures recommended in a Code of Practice is deemed to fulfil the platform’s duties under the Bill. Much would therefore rest on Ofcom’s view of the law of encouragement and assistance and what would constitute reasonable grounds to draw an inference of illegality in various factual scenarios.<br /><br />Seventh, the involvement of the Secretary of State. Ofcom might consider whether to adopt the Secretary of State’s ‘in a positive light’ interpretation. As the Bill currently stands, if the Secretary of State did not approve of Ofcom’s recommendation for public policy reasons s/he could send the draft Code of Practice back to Ofcom with a direction to modify it – and, it seems, keep on doing so until s/he was happy with its contents.
<br /><br />Even if that controversial power of direction were removed from the Bill, Ofcom would still have significant day to day power to adopt interpretations of the law and apply them to platforms’ decision-making (albeit Ofcom’s interpretations would in principle be open to challenge by judicial review). <br /><br />As against those seven points, in fulfilling its duties under the Bill a platform is required to have particular regard to the importance of protecting users’ right to freedom of expression within the law. ‘Within the law’ might suggest that the duty has minimal relevance to the illegality duties, especially when clause 170 sets out expressly how platforms are to determine illegality. It provides that if the reasonable grounds to infer test is satisfied, the platform <i>must</i> treat the content as illegal. <br /><br />The government’s ECHR Memorandum suggests that the ‘have particular regard’ duty may have some effect on illegality determination, but it does not explain how it does so in the face of the express provisions of clause 170. It also inaccurately paraphrases clause 18 by omitting ‘within the law’:<br /><blockquote>“34. Under clause 18, all in-scope service providers are required to have regard to the importance of protecting freedom of expression when deciding on and implementing their safety policies and procedures. This will include assessments as to whether content is illegal or of a certain type and how to fulfil its duties in relation to such content. Clause 170 makes clear that providers are not required to treat content as illegal content (i.e. to remove it from their service) unless they have reasonable grounds to infer that all elements of a relevant offence are made out. They must make that inference on the basis of all relevant information reasonably available to them.” </blockquote>That is all by way of lengthy preliminary. 
Now let us delve into how a platform might be required to go about assessing the legality of a Channel dinghy video under the Accessories and Abettors Act 1861, then for the companion encouragement and assistance offences under the Serious Crime Act 2007. <br /><br />Let us assume that the Secretary of State is right: that posting a video of people crossing the Channel in dinghies, which shows that activity in a positive light, can in principle amount to encouraging an illegal entry offence. In the interests of simplicity, I will ignore the Secretary of State’s reference to conspiracy. How should a platform go about determining illegality? <br /><br />Spoiler alert: the process is more complicated and difficult than the Secretary of State’s pronouncement might suggest. And in case anyone is inclined to charge me with excessive legal pedantry, let us not forget that the task that the Bill expressly requires a platform to undertake is to apply the rules laid down in the Bill and in the relevant underlying offences. The task is not to take a rough and ready ‘that looks a bit dodgy, take it down’, or ‘<a href="https://www.dailymail.co.uk/news/article-9655757/Priti-Patel-blames-Facebook-Twitter-TikTok-soaring-number-migrants.html" target="_blank">the Home Secretary has complained about this content</a> so we’d better remove it’ approach. Whether what the Bill requires is at all realistic is another matter. 
<br /><br /><b>Aiding, abetting and counselling – the 1861 Act </b><br /><br />Aiding, abetting and counselling (the words used by the Secretary of State) is the language of the 1861 Act: “Whosoever shall aid, abet, counsel or procure the commission of any indictable offence … shall be liable to be tried, indicted and punished as a principal offender.” <br /><br />One of the most significant features of accessory liability under the 1861 Act is that there can be no liability for aiding, abetting, counselling or procuring unless and until the principal offence has actually occurred. Whilst the aiding, abetting etc does not have to cause the principal offence that occurred, there has to be some connecting link with it. As Toulson LJ put it in <i>Stringer</i>:<br /><blockquote>“Whereas the provision of assistance need not involve communication between D and P, encouragement by its nature involves some form of transmission of the encouragement by words or conduct, whether directly or via an intermediary. An un-posted letter of encouragement would not be encouragement unless P chanced to discover it and read it. Similarly, it would be unreal to regard P as acting with the assistance or encouragement of D if the only encouragement took the form of words spoken by D out of P's earshot.” </blockquote><i>Timing</i> This gives rise to a timing problem for a platform tasked with assessing whether a video is illegal. For illegality to arise under the 1861 Act the video must in fact have been viewed by someone contemplating an illegal entry offence, the video would have to have encouraged them to enter the UK illegally, and they would have to have proceeded to do so (or attempt to do so). <br /><br />Absent those factual events having taken place, there can be no offence of aiding and abetting.
The aiding and abetting offence would further require the person posting the video to have intended the person contemplating illegal entry to view the video and to have intended to encourage their subsequent actual or attempted illegal entry. <br /><br />Thus if a platform is assessing a video that is present on the platform, in order to adjudge the video to be illegal it would at a minimum have to consider how long it has been present on the platform. That is because there must be reasonable grounds to infer both that a prospective migrant has viewed it and that since doing so that person has already either entered the UK illegally or attempted to do so. Otherwise no principal offence has yet occurred and so no offence of aiding and abetting the principal offence can have been committed by posting the video. <br /><br />It may in any case be a nice question whether, in the absence of any evidence available to the platform that a prospective migrant has in fact viewed the video, the platform would have reasonable grounds to infer the existence of any of these facts. To do so would appear to involve making an assumption of someone viewing the video and of a connected illegal entry offence that the assumed viewing has in fact encouraged. <br /><br />For a post blocked by filtering at the point of upload (if that were considered feasible) the timing issue becomes a conundrum. Since no-one can have viewed a blocked video, none of the required subsequent events can possibly have occurred. Nor does the law provide any offence of attempting to aid and abet a 1971 Act offence. <br /><br />Thus at least for upload filtering it appears that either there is a conceptual bar to a platform determining that a video blocked at the point of upload amounts to aiding and abetting, or the platform would (if the Bill permits it) have to engage in some legal time travel and assess illegality on a hypothetical future basis.
<br /><br />A basis on which a platform could be required to assess such hypothetical illegality may be provided by Clause 53(14)(b) of the Bill, which in effect provides that illegal content includes content that would be illegal if it were present on the platform. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Even then, a video present on the platform only as a legal fiction cannot as a matter of fact be connected to any subsequent actual encouraged primary offence. Deemed presence would therefore have to be notionally extended for a sufficient period to hypothesise the factual events necessary for completion of the aiding and abetting offence: that a notional prospective migrant has hypothetically viewed the video present on the service, hypothetically been encouraged by the video to commit or attempt an illegal entry offence, and hypothetically then done so. <br /><br />Even if any of this hypothesising is permissible under the Bill, whether it could provide reasonable grounds to infer illegality is a matter for conjecture. The need to hypothesise the existence of an actual illegal entry offence would never arise in a prosecution in court, since for a prosecution of the accessory to succeed it must be proved that the principal offence has taken place. In court, therefore, the assessment of accessory liability will always be within the context of a known past set of facts that are proved to have amounted to an offence by a principal. <br /><br /><i>Intent</i> The platform would also have to consider whether it has reasonable grounds to infer that the poster had the necessary intention to aid, abet etc the actual or attempted offence. 
<br /><br />In court the prosecution would have to prove, beyond reasonable doubt, that the poster intended a viewer of the video to obtain or attempt illegal entry to the UK, the poster having knowledge of the facts that would and did render the principal’s conduct criminal. (‘Did’, because there can be no conviction for aiding and abetting unless the principal offence is proved to have taken place.) <br /><br />That would raise the question of whether generalised knowledge of the existence of people crossing the Channel who might view the video and be encouraged by it would be sufficient to satisfy the knowledge requirement, when the poster would have been unaware of the particular individual who had in fact viewed the video and then committed the offence. Whilst it might be legitimate to find intent where the video is specifically promoting illegal crossings to prospective migrants, such a finding would seem to be highly debatable if the video did not offer targeted encouragement, even if it portrayed such activities in a positive light. <br /><br />How should a platform decide whether the poster of the video had the requisite intent to constitute an aiding and abetting offence? The Bill requires the platform to apply the ‘reasonable grounds to infer’ test. It has to make that assessment on the basis of all the information reasonably available to it. That would likely bring into account not only the content of the video, but any surrounding text in the post and (if apparent) the nature of the person posting. The intent of a video advertising illegal Channel crossings might be clear, the intent of a bare clip of a dinghy carrying migrants (even if it showed smiling occupants and was accompanied by upbeat music) not so much. <br /><br /><b>Serious Crime Act 2007 – encouraging and assisting </b><br /><br />We started by considering aiding and abetting under the 1861 Act because that is what the language used by the Secretary of State appeared to allude to. 
That is not, however, the end of the story. The Serious Crime Act 2007 enacted encouragement and assistance offences that, unlike aiding and abetting, do not depend on the principal offence actually taking place. They therefore avoid the time travel and hypothesising contortions involved in applying the Bill to the 1861 Act. <br /><br />Also unlike aiding and abetting, an attempt to commit an encouragement or assistance offence under the 2007 Act is itself an offence. In principle therefore, a foiled attempt to upload a video capable of constituting an encouragement or assistance offence under the 2007 Act could itself constitute an offence. <br /><br />By way of illustration, consider the simplest 2007 Act offence, S.44:<br /><blockquote> “(1) A person commits an offence if— <br /><br />(a) he does an act capable of encouraging or assisting the commission of an offence; and <br /><br />(b) he intends to encourage or assist its commission. <br /><br />(2) But he is not to be taken to have intended to encourage or assist the commission of an offence merely because such encouragement or assistance was a foreseeable consequence of his act.”</blockquote>So a platform tasked with adjudging whether the video is illegal would have to consider not only whether posting the video is ‘capable’ of encouraging the commission of an unlawful entry offence, but also whether the person who posted it intended to encourage the commission of the offence; bearing in mind that a mere foreseeable consequence does not count as intent. (That, it might be thought, rules out any but the most targeted advertising or promotional videos.) <br /><br />How should a platform go about these two tasks? As with the 1861 Act aiding and abetting offences, part of the answer lies in Clause 170 of the Bill, which specifies the standard of ‘reasonable grounds to infer’ based on ‘all information reasonably available’ to the platform. 
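As a thought experiment, the two S.44 elements overlaid with the Bill's ‘reasonable grounds to infer’ standard can be sketched as a decision procedure. Again this is purely hypothetical illustration: the names are my own, and it deliberately omits the statutory defences (discussed below), which add a further layer that booleans cannot capture.

```python
from dataclasses import dataclass

@dataclass
class S44Assessment:
    """What the platform can infer from 'all information reasonably available'."""
    capable_of_encouraging: bool   # s.44(1)(a): an act capable of encouraging the offence
    grounds_to_infer_intent: bool  # s.44(1)(b): intent to encourage its commission
    intent_only_foreseeable: bool  # s.44(2): 'intent' rests on a merely foreseeable consequence

def reasonable_grounds_to_infer_s44(a: S44Assessment) -> bool:
    if not a.capable_of_encouraging:
        return False
    # s.44(2): mere foreseeability is expressly excluded from counting as intent
    if a.intent_only_foreseeable:
        return False
    return a.grounds_to_infer_intent

# A targeted advert for crossings versus a bare clip of a dinghy:
advert = S44Assessment(True, True, False)
bare_clip = S44Assessment(True, False, True)
assert reasonable_grounds_to_infer_s44(advert) is True
assert reasonable_grounds_to_infer_s44(bare_clip) is False
```

Even reduced to this skeleton, everything turns on how each boolean is set, and each one stands in for an inference from limited information rather than a fact the platform can actually know.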
<br /><br />The analysis would be based on the same information as for aiding and abetting, but without the need to show (or hypothesise) that anyone actually viewed or acted upon the video. It is enough if publication of the video is capable of encouraging the offence. However, the express exclusion of a merely foreseeable consequence would limit the inference of intention that it is reasonable for the platform to draw. <br /><br /><i>Defence of reasonable conduct</i> Unlike for the 1861 Act aiding and abetting offence, the 2007 Act offences provide a defence of ‘reasonable conduct’. This comes in two different versions:<br /><br />(1) that the defendant knew that certain circumstances existed and that it was reasonable for him to act as he did in those circumstances; or<br /><br />(2) that he believed certain circumstances to exist, that his belief was reasonable, and that it was reasonable for him to act as he did in the circumstances as he believed them to be. <br /><br />Factors that the 2007 Act states have to be considered in relation to reasonableness include the seriousness of the offence and any purpose for which the defendant claims to have been acting. A 2007 Act defence will succeed in court if the defendant proves it on the balance of probabilities. <br /><br />The information on which the possibility of a reasonableness defence depends may well be extrinsic to the platform or its automated systems. The purpose for which a user has acted is something within the user’s knowledge and belief and may not be apparent from the post itself. <br /><br />As already mentioned, this is significant because the platform cannot consider the possibility of a defence unless, on the basis of all relevant information that is reasonably available to it, it has reasonable grounds to infer that a defence may be successfully relied upon (in the context of the 2007 Act defence: successful on the balance of probabilities). 
<br /><br />In determining what information is reasonably available to the provider, the following factors, in particular, are relevant: (a) the size and capacity of the provider, and (b) whether a judgement is made by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators. <br /><br />The probable net result, for an automated system, is that the possibility of a defence is to be ignored unless it is apparent from the information processed by the system. Yet for the 2007 Act encouragement and assistance offences, the defences are an integral element of the offence, designed to balance the potentially overreaching effects of inchoate liability founded on mere capability. <br /><br />In reality, however, it smacks of fantasy to imagine that a platform, whether employing automated systems, human moderators, or a combination of the two, would be capable of applying rules of this nuance and complexity, particularly in real or near real time. <br /><br /><b>The broader issue</b><br /><br />These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences, although the Secretary of State’s statement has provided an unexpected opportunity to illustrate them. They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression. 
<br /><br />It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">may be</a> what compatibility with the Article 10 right of freedom of expression requires.</span></div></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUub208V-kfMDBUouc8kQnuCMextYj4UEvJDa7VIgftNSL4BfAD4cQo15_l6dsb7G82xzoX9S75-pMKlP4CbgqlQEkVoc0vq2r-SaufsGzsoI7X9CPFuN9BuFcLPFE_DYigNP316JnqAsMOzQSWm7CRyyAyinofI2qlgf4xPKRggJHOFSAgUbs-NokBQ/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUub208V-kfMDBUouc8kQnuCMextYj4UEvJDa7VIgftNSL4BfAD4cQo15_l6dsb7G82xzoX9S75-pMKlP4CbgqlQEkVoc0vq2r-SaufsGzsoI7X9CPFuN9BuFcLPFE_DYigNP316JnqAsMOzQSWm7CRyyAyinofI2qlgf4xPKRggJHOFSAgUbs-NokBQ/s1600/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-86300011267645487512023-01-06T08:59:00.000+00:002023-01-06T08:59:14.697+00:00Twenty questions about the Online Safety Bill<p><span style="font-family: georgia;">Before Christmas Culture Secretary Michelle Donelan <a href="https://www.gov.uk/government/news/put-your-online-safety-bill-questions-to-secretary-of-state-michelle-donelan" target="_blank">invited</a> members of the public to submit questions about the Online Safety Bill, which she will <a href="https://twitter.com/DCMS/status/1611015916753506304" target="_blank">sit down to answer</a> in the New Year. 
</span></p><p><span style="font-family: georgia;">Here are mine. </span></p><p><span style="font-family: georgia;">1.<span style="white-space: pre;"> </span>A volunteer who sets up and operates a Mastodon instance in their spare time appears to be the provider of a user-to-user service. Is that correct?</span></p><p><span style="font-family: georgia;">2.<span style="white-space: pre;"> </span>Alice runs a personal blog on a blogging platform and is able to decide which third party comments on her blogposts to accept or reject. Is Alice (subject to any Schedule 1 exemptions) the provider of a user-to-user service in relation to those third party comments?</span></p><p><span style="font-family: georgia;">3.<span style="white-space: pre;"> </span>Bob runs a blog on a blogging platform. He has multiple contributors, whom he selects. Is Bob the provider of a user-to-user service in relation to their contributions?</span></p><p><span style="font-family: georgia;">4.<span style="white-space: pre;"> </span>Is a collaborative software development platform the provider of a user-to-user service?</span></p><p><span style="font-family: georgia;">5.<span style="white-space: pre;"> </span>The exclusion from “regulated user-generated content” extends to comments on comments (Clause 49(6)). But a facility enabling free form ‘comments on comments’ appears to disapply the Sch 1 para 4 limited functionality user-to-user service exemption. Is that correct? If so, what is the rationale for the difference? Would, for example, a newspaper website with functionality that enabled free form ‘comments on comments’ therefore not enjoy exclusion from scope under Sch 1 para 4?</span></p><p><span style="font-family: georgia;">6.<span style="white-space: pre;"> </span>Does the Sch 1 para 4 limited functionality exemption apply to goods retailers’ own-product review sections? 
If so, does it achieve that when it refers only to content and not to the goods themselves?</span></p><p><span style="font-family: georgia;">7.<span style="white-space: pre;"> </span>Would a site that enables academics to upload papers, subject to prior review by the site operator, be a user-to-user service? </span></p><p><span style="font-family: georgia;">8.<span style="white-space: pre;"> </span>Cl 204(2)(e) appears to suggest that a multiplayer online game would be a user-to-user service by virtue of player interaction alone, whether or not there is an inter-player chat or similar facility. Is that right?</span></p><p><span style="font-family: georgia;">9.<span style="white-space: pre;"> </span>Carol sets up and operates a voluntary online neighbourhood watch forum for her locality. Would Carol be a provider of a user-to-user service? </span></p><p><span style="font-family: georgia;">10.<span style="white-space: pre;"> </span>Dan operates a blockchain node. Would Dan be a provider of a user-to-user service?</span></p><p><span style="font-family: georgia;"></span></p><p><span style="font-family: georgia;">11.<span style="white-space: pre;"> </span>Grace chairs a public meeting using a video platform. Grace has control over who can join the meeting. Would Grace be a provider of a user-to-user service in relation to that meeting?</span></p><p><span style="font-family: georgia;">12.<span style="white-space: pre;"> </span>The threshold that the Bill requires a platform to apply when determining criminal illegality is ‘reasonable grounds to infer’. The criminal standard of proof is ‘beyond reasonable doubt’. Would not the Bill’s lower threshold inevitably require removal (at least for proactive obligations) of content that is in fact legal? 
For automated real time systems would that not occur at scale?</span></p><p><span style="font-family: georgia;">13.<span style="white-space: pre;"> </span>The Bill requires a platform to adjudge illegality on the basis of all relevant information reasonably available to it. Particularly for proactive automated processes, that will be limited to what users have posted to the platform. Yet often, illegality depends crucially on extrinsic contextual information that is not available to the platform. How could the adjudgment required by the Bill thus not be arbitrary?</span></p><p><span style="font-family: georgia;">14.<span style="white-space: pre;"> </span>For many offences the question of illegality is likely to revolve mainly around intent and available defences. The Bill requires platforms to assess illegality on the basis that the possibility of a defence is to be taken into account only if the platform has reasonable grounds to infer that a defence may successfully be relied upon. Yet the information from which the possibility of a defence (such as reasonable excuse) might be inferred will very often be extrinsic context that, especially for proactive obligations, is not available to a platform. Would that not inevitably require removal of content that is in fact legal? For automated real time systems would that not occur at scale?</span></p><p><span style="font-family: georgia;">15.<span style="white-space: pre;"> </span>The Bill requires platforms to have particular regard to the importance of protecting users’ right to freedom of expression ‘within the law’. Does that modify the express requirements of Clause 170 as to how a platform should assess illegality? If so, how?</span></p><p><span style="font-family: georgia;">16.<span style="white-space: pre;"> </span>The government’s European Convention on Human Rights Memorandum contains no discussion of the Bill’s illegality duties as a form of prior restraint. 
Nor does it address the human rights implications of the ‘reasonable grounds to infer’ clause, which was introduced later. Will the government issue a revised Memorandum?</span></p><p><span style="font-family: georgia;">17.<span style="white-space: pre;"> </span>Is it intended that the risks of harm to individuals to be mitigated and managed under Clause 9(2)(c) should be limited to those arising from illegality identified in the illegality risk assessment? If so, how does the Bill achieve that?</span></p><p><span style="font-family: georgia;">18.<span style="white-space: pre;"> </span>The Bill contains powers to require private messaging services to use accredited technology to identify CSEA content. It also contains an obligation to report all new detected material to the National Crime Agency. The Explanatory Notes state that services will be required to report all and any available information relating to instances of CSEA, including any that help identify a perpetrator or victim. </span></p><p><span style="font-family: georgia;">The White Paper noted that “Many children and young people take and share sexual images. Creating, possessing, copying or distributing sexual or indecent images of children and young people under the age of 18 is illegal, including those taken and shared by the subject of the image.” Does this mean that an under-18 consensually taking and sharing an indecent selfie on a private messaging platform would automatically be reported to the National Crime Agency if the image is detected by the platform?</span></p><p><span style="font-family: georgia;">19.<span style="white-space: pre;"> </span>What are the estimated familiarisation and compliance costs for an in-scope small business or voluntary user-to-user service? What is the calculation of the estimated costs? 
</span></p><p><span style="font-family: georgia;"></span></p><p><span style="font-family: georgia;">20.<span style="white-space: pre;"> </span>The Law Commission in 2018 stated that the common law public nuisance offence applied to online communications. The statutory replacement in s.78 of the Police, Crime, Sentencing and Courts Act 2022 does so too. Could a platform’s reactive duty under Cl. 9, combined with Cl. 170, require it to determine whether it has reasonable grounds to infer that a user’s post creates a risk of causing serious annoyance to a section of the public?</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUGQVqoZ6eN4a_GvJ_6leSELCWQfkLyJF5QHwMx-gqElM33nyIcyrxxDJmBjtfe-PQJSBs7Ui0CX61latw6D7ym_gL6NdOKDO1iFWSnYjb7eI8aXmIxKGrKRY-5L3DXdSAUoX2CYxNkZtHUxDhOxHwER0SCkzcCIoBzTuUC3phGTcINmPUs-CbMCVblg/s135/snip2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUGQVqoZ6eN4a_GvJ_6leSELCWQfkLyJF5QHwMx-gqElM33nyIcyrxxDJmBjtfe-PQJSBs7Ui0CX61latw6D7ym_gL6NdOKDO1iFWSnYjb7eI8aXmIxKGrKRY-5L3DXdSAUoX2CYxNkZtHUxDhOxHwER0SCkzcCIoBzTuUC3phGTcINmPUs-CbMCVblg/s1600/snip2.png" width="135" /></a></div><br /><div><br /></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-6216398259382360412022-12-13T11:45:00.000+00:002022-12-13T11:45:10.669+00:00(Some of) what is legal offline is illegal online<p><span style="font-family: georgia;">From what feels like time immemorial the UK government has paraded its proposed online harms legislation under the banner of ‘What is Illegal Offline is Illegal Online’. As a description of what is now the Online Safety Bill, the slogan is ill-fitting. 
<a href="https://bills.parliament.uk/bills/3137" target="_blank">The Bill</a> contains nothing that extends to online behaviour a criminal offence that was previously limited to offline. </span></p><p><span style="font-family: georgia;">That is for the simple reason that almost no such offences exist. An exception that proves the rule is the law requiring imprints only on physical election literature, a gap that has been plugged not by the Online Safety Bill but by the Elections Act 2022. </span></p><p><span style="font-family: georgia;">If the slogan is intended to mean that since what is illegal offline is illegal online, equivalent mechanisms should be put in place to combat online illegality, that does not compute either. As we shall see, the Bill's approach differs significantly from offline procedures for determining the illegality of individual speech - not just in form and process, but in the substantive standards to be applied. </span></p><p><span style="font-family: georgia;">Perhaps in implicit recognition of these inconvenient truths, the government’s favoured slogan has undergone many transformations:</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“We will be consistent in our approach to regulation of online and offline media.” (<a href="https://ucrel.lancs.ac.uk/wmatrix/ukmanifestos2017/localpdf/Conservatives.pdf" target="_blank">Conservative Party Manifesto</a>, 18 May 2017)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“What is unacceptable offline should be unacceptable online.” (<a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/650949/Internet_Safety_Strategy_green_paper.pdf" target="_blank">Internet Safety Strategy Green Paper</a>, October 2017)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“Behaviour that is illegal offline should be treated the same when it’s 
committed online.” (Then Digital Minister Margot James, <a href="https://crimeline.co.uk/reform-of-the-criminal-law-needed-to-protect-victims-from-online-abuse-says-law-commission/" target="_blank">1 November 2018</a>)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…” (Then Culture Secretary Jeremy Wright QC, <a href="https://dcmsblog.uk/2019/02/we-must-make-the-online-world-a-safer-place/" target="_blank">21 February 2019</a>) </span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“For illegal harms, it is also important to ensure that the criminal law applies online in the same way as it applies offline” (<a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/973939/Online_Harms_White_Paper_V2.pdf" target="_blank">Online Harms White Paper, April 2019</a>)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>"Of course... what is illegal offline is illegal online, so we have existing laws to deal with it." 
(Home Office Lords Minister Baroness Williams, <a href="https://committees.parliament.uk/oralevidence/359/default/" target="_blank">13 May 2020</a>)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“If it’s unacceptable offline then it’s unacceptable online” (DCMS, <a href="https://twitter.com/DCMS/status/1338829911780438022" target="_blank">tweet</a> 15 December 2020)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>"If it is illegal offline, it is illegal online.” (Then Culture Secretary Oliver Dowden, <a href="https://hansard.parliament.uk/commons/2020-12-15/debates/1B8FD703-21A5-4E85-B888-FFCC5705D456/OnlineHarmsConsultation" target="_blank">House of Commons</a> 15 December 2020)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“The most important provision of [our coming online harms legislation] is to make what's illegal on the street, illegal online” (Then Culture Secretary Oliver Dowden, <a href="https://www.telegraph.co.uk/politics/2021/03/28/social-media-companies-will-forced-take-posts-illegal-street/" target="_blank">29 March 2021</a>)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span> “What's illegal offline should be regulated online.” (Damian Collins, then Chair of the Joint Pre-Legislative Scrutiny Committee, <a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/159784/no-longer-the-land-of-the-lawless-joint-committee-reports/" target="_blank">14 December 2021</a>)</span></p><p><span style="font-family: georgia;">-<span style="white-space: pre;"> </span>“The laws we have established to protect people in the offline world, need to apply online as well.” (Then former DCMS Minister Damian Collins MP, <a href="https://damiancollins.com/keynote-speech-european-data-summit-2022/" target="_blank">2 Dec 2022</a>) </span></p><p><span 
style="font-family: georgia;">Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s <a href="https://twitter.com/DCMS/status/1601185931918462976" target="_blank">social media infographics</a> once more proclaim that ‘What is illegal offline is illegal online’.</span></p><p><span style="font-family: georgia;">The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is <i>legal</i> offline is (or should be) <i>legal</i> online? The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "<a href="https://questions-statements.parliament.uk/written-statements/detail/2022-11-29/hlws385" target="_blank">However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person</a>." </span></p><p><span style="font-family: georgia;">Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that.</span></p><p><span style="font-family: georgia;"><b>Section 127 – the online-only criminal offence</b></span></p><p><span style="font-family: georgia;">First, consider illegality in its most commonly understood sense: criminal offences.</span></p><p><span style="font-family: georgia;">The latest version of the Bill scraps the previously proposed new harmful communications offence, reinstating S.127(1) of the Communications Act 2003 which it would have replaced. The harmful communications offence, for all its <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">grievous shortcomings</a>, made no distinction between offline and online. S.127(1), however, is online only. 
Moreover, it is more restrictive than any offline equivalent.</span></p><p><span style="font-family: georgia;">S.127(1), <a href="https://www.cyberleagle.com/2015/02/from-telegram-to-tweet-section-127-and.html" target="_blank">notoriously</a>, makes it an offence to send by means of a public electronic communications network a “message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. It is difficult to be sure of its precise scope – indeed one of the main objections to it is the vagueness inherent in ‘grossly offensive’. But it has no direct offline counterpart. </span></p><p><span style="font-family: georgia;">The closest equivalent is the Malicious Communications Act 1988, also now to be reprieved. The MCA applies to both offline and online communications. Whilst like S.127(1) it contains the ‘grossly offensive’ formulation, it is narrower by virtue of a purpose condition that is absent in S.127(1). Also the MCA offence appears not to apply to generally available, non-targeted postings on an online platform (<a href="https://www.lawcom.gov.uk/abusive-and-offensive-online-communications/" target="_blank">Law Commission Scoping Report 2018</a>, paras 4.26 to 4.29). That leaves S.127(1) not only broader in substance, but catching many kinds of online communication to which the MCA does not apply at all.</span></p><p><span style="font-family: georgia;">Para 4.63 of the Law Commission Scoping Report noted: “Indeed, as subsequent Chapters will illustrate, section 127 of the CA 2003 criminalises many forms of speech that would not be an offence in the “offline” world, even if spoken with the intention described in section 127.”</span></p><p><span style="font-family: georgia;">For S.127(1) that situation will be continued - at least while the government gives further consideration to the criminal law on harmful communications. 
But although the new harmful communications offence was rightly condemned, was the government really faced with having to make a binary choice between frying pan and fire?</span></p><p><span style="font-family: georgia;"><b>Online liability to have content filtered or removed</b></span></p><p><span style="font-family: georgia;">Second, we have illegality in terms of ‘having my content compulsorily removed’.</span></p><p><span style="font-family: georgia;">This is not illegality in the normal sense of liability to be prosecuted and found guilty of a criminal offence. Nor is it illegality in the sense of being sued and found liable in the civil courts. It is more akin to an author having their book seized with no further sanction. We lawyers may debate whether this is illegality properly so called. To the user whose online post is filtered or removed it will certainly feel like it, even though no court has declared the content illegal or ordered its seizure.</span></p><p><span style="font-family: georgia;">The Bill creates this kind of illegality (if it be such) in a novel way: an online post would be filtered or removed by a platform because it is required to do so by virtue of a preventative or reactive duty of care articulated in the Bill. This creature of statute has - for speech - no offline equivalent. See discussion <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">here</a> and <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">here</a>. </span></p><p><span style="font-family: georgia;">The online-offline asymmetry does not stop there. If we dig more deeply into a comparison with criminal offences we find other ways in which the Bill’s illegality duty treats online content more restrictively than offline. 
</span></p><p><span style="font-family: georgia;">Two features stand out, both stemming from the Bill's <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">recently inserted clause</a> setting out how online platforms should adjudge the illegality of users' content.</span></p><p><span style="font-family: georgia;"><b>The online illegality inference engine</b></span></p><p><span style="font-family: georgia;">First, in contrast to the criminal standard of proof – beyond reasonable doubt – the platform is required to find illegality if it has ‘reasonable grounds to infer’ that the elements of the offence are present. That applies both to factual elements and to any required purpose, intention or other mental element.</span></p><p><span style="font-family: georgia;">The acts potentially constituting an offence may be cast widely, in which event the most important issues are likely to be intent and whether the user has an available defence (such as, in some cases, reasonable excuse). </span></p><p><span style="font-family: georgia;">Under </span><span style="font-family: georgia;">the Bill, unless the platform has information on the basis of which it can infer that a defence may successfully be relied on, the possibility of a defence is to be left out of consideration.</span><span style="font-family: georgia;"> That leads into the second feature.</span></p><p><span style="font-family: georgia;"><b>The online information vacuum</b></span></p><p><span style="font-family: georgia;">The Bill requires platforms to determine illegality on the basis of information reasonably available to them. But how much (or little) information is that likely to be? </span><span style="font-family: georgia;"> </span></p><p><span style="font-family: georgia;">Platforms will be required to make decisions on illegality in a comparative knowledge vacuum. 
The paucity of information is most apparent in the case of proactive, automated real-time filtering. A system can work only on user content that it has processed, which inevitably omits extrinsic contextual information. </span></p><p><span style="font-family: georgia;">For many offences, especially those in which defences such as reasonable excuse bear the main legality burden, such absent contextual information would otherwise be likely to form an important, even decisive, part of determining whether an offence has been committed. </span></p><p><span style="font-family: georgia;">For both of these reasons the Bill’s approach to online would inevitably lead to compulsory filtering and removal of legal online content at scale, in a way that has no counterpart offline. It is difficult to see how a requirement on platforms to have regard (or particular regard, as a proposed government amendment would have it) to the </span><span style="font-family: georgia;">importance of protecting users’ right to freedom of </span><span style="font-family: georgia;">expression within the law</span><span style="font-family: georgia;"> could act as an effective antidote to the express terms of the legislation that spell out how platforms should adjudge illegality.</span></p><p><span style="font-family: georgia;"><b>Online prior restraint</b></span></p><p><span style="font-family: georgia;">These two features exist against the background that the illegality duty is a <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">form of prior restraint</a>: the Bill requires content filtering and removal decisions to be made before any fully informed, fully argued decision on the merits takes place (if it ever would). A presumption against prior restraint has <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">long formed part of the English common law</a> and of human rights law. 
For online, no longer.</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsAZsAx-50Wh5rbxH7jK9wNT30PpPmepI39WTyz4bn7SDX-ktZ7JhM0bVHW4kiPlC4ADMbAd5XIboK-puWp--M47aplN14j9XQk9F64DPro1R6XYoT8rCHSoy3WyHAPqNBVfZCHZgQhAH7EvUXfLC45NpESbcLoxqAM3t3pvcg0NkqFaarUzoWKrw0lA/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsAZsAx-50Wh5rbxH7jK9wNT30PpPmepI39WTyz4bn7SDX-ktZ7JhM0bVHW4kiPlC4ADMbAd5XIboK-puWp--M47aplN14j9XQk9F64DPro1R6XYoT8rCHSoy3WyHAPqNBVfZCHZgQhAH7EvUXfLC45NpESbcLoxqAM3t3pvcg0NkqFaarUzoWKrw0lA/s1600/snip2.png" width="135" /></a></div><br /><div><br /></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-47804068412747162022-11-25T17:25:00.042+00:002023-01-09T16:00:59.930+00:00How well do you know the Online Safety Bill?<p class="MsoNormal"><span style="font-family: georgia;">With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill. The</span><span style="font-family: georgia;"> labels
on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate? Click on the expandable text to find out.
</span></p>
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill makes illegal online what is illegal offline.</span></summary><br /><span style="font-family: georgia;">No. We have to go a long way to find a criminal offence that does not already apply online as well as offline (other than those such as driving a car without a licence, which by their nature can apply only to the physical world). One of the few remaining anomalies is the paper-only requirement for imprints on election literature – a gap that will be plugged when the relevant provisions of the Elections Act 2022 come into force. </span><br /><br /><span style="font-family: georgia;">
Moreover, in its fundamentals the Bill <a href="https://mediawrites.law/online-harms-white-paper-series-whatever-happened-to-online-offline-equivalence/" target="_blank">departs from the principle of online-offline equivalence</a>. Its duties of care are extended in ways that <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">have no offline comparable</a>. It creates a <a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" target="_blank">broadcast-style Ofcom regulatory regime</a> that has no counterpart for individual speech offline: regulation by discretionary regulator rather than by clear, certain, general laws. </span><br /><br /><span style="font-family: georgia;">
The real theme underlying the Bill is far removed from offline-online equivalence. It is that online speech is different from offline: greater in reach, more persistent, more dangerous and more in need of a regulator’s controlling hand.
</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">Under the Bill's safety duty, before removing a user's post a platform will have to be
satisfied to the criminal standard that it is illegal.</span></summary><br /><span style="font-family: georgia;">No. The current version of the Bill sets <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">‘reasonable grounds to infer’</a> as the platform’s threshold for adjudging illegality. </span><br /><br /><span style="font-family: georgia;">
Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon. </span><br /><br /><span style="font-family: georgia;">
The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality. In a <a href="https://hansard.parliament.uk/lords/2022-10-27/debates/8F08CFEB-BCD5-4D02-B35C-B4B54B299A50/FreedomOfExpression(CommunicationsAndDigitalCommitteeReport)">recent House of Lords debate</a> Lord Gilbert, who chaired the Lords Committee that produced a <a href="https://committees.parliament.uk/publications/6878/documents/72529/default/" target="_blank">Report on Freedom of Expression in the Digital Age</a>, asked whether the government had considered a change in the standard from “reasonable grounds to believe” to “manifestly illegal”. The government minister replied by referring to the "reasonable grounds to infer" amendment, which he said would protect against both under-removal and over-removal of content.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill will repeal the S.127 Communications Act 2003 offences.</span></summary><br /><span style="font-family: georgia;">
Half true. Following a recommendation by the Law Commission of England and Wales the Bill will replace both S.127 (of Twitter Joke Trial notoriety) and the Malicious Communications Act 1988 with new offences, notably <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">sending a harmful communication</a>.<br /><br />
However, the repeal of S.127 is only for England and Wales. S.127 will continue in force in Scotland. As a result, for the purposes of a platform’s illegality safety duty the Bill will deem the remaining Scottish S.127 offence to apply throughout the UK. So in deciding whether it has reasonable grounds to infer illegality a platform would have to apply both the existing S.127 and its replacement. <span style="color: red;">[Update: the government <a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws">announced</a> on 28 November 2022 that the 'grossly offensive' offences under S.127(1) and the MCA 1988 will no longer be repealed, following its decision to drop the new harmful communications offence.]</span> </span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">A platform may be required to adjudge whether a post causes spiritual injury.</span></summary><br /><span style="font-family: georgia;">
True. The <a href="https://bills.parliament.uk/bills/3154" target="_blank">National Security Bill</a> will create a new offence of foreign interference. One route to committing the offence requires establishing that the conduct involves coercion. An example of coercion is given as “causing spiritual injury to, or placing undue spiritual pressure on, a person”.</span><br /><br /><span style="font-family: georgia;">
The new offence would be designated as a priority offence under the Online Safety Bill, meaning that platforms would have to take proactive steps to prevent users encountering such content.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">A platform may be required to adjudge whether a post represents a contribution to a matter of public interest.</span></summary><br /><span style="font-family: georgia;">
True. The new <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">harmful communications offence</a> (originating from a <a href="https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/" target="_blank">recommendation by the Law Commission</a>) provides that the prosecution must prove, among other things, that the sender has no reasonable excuse for sending the message. Although not determinative, one of the factors that the court must consider (if it is relevant in a particular case) is whether the message is, or is intended to be, a contribution to a matter of public interest. </span><br /><br /><span style="font-family: georgia;">
A platform faced with a complaint that a post is illegal by virtue of this offence would be put in the position of making a judgment on public interest, applying the standard of whether it has reasonable grounds to infer illegality. During the Commons Committee stage the then Digital Minister Chris Philp <a href="https://hansard.parliament.uk/Commons/2022-06-07/debates/90e5ab5b-a47b-4750-9c47-3f0ac7cd46eb/OnlineSafetyBill(SixthSitting)" target="_blank">elaborated</a> on the task that a platform would have to undertake. It would, he said, perform a "balancing exercise" in assessing whether the content was a contribution to a matter of public interest. <span style="color: red;">[Update: the government </span><a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws">announced</a><span style="color: red;"> on 28 November 2022 that the proposed new harmful communications offence will be dropped.]</span></span><br /><br /><span style="font-family: georgia;">
The House of Lords Communications and Digital Committee <a href="https://committees.parliament.uk/publications/6878/documents/72529/default/" target="_blank">Report on Freedom of Expression in the Digital Age</a> contains the following illuminating exchange: 'We asked the Law Commission how platforms’ algorithms and content moderators could be expected to identify posts which would be illegal under its proposals. Professor Lewis told us: “We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it.”' However, it is the freedom of speech of users, not businesses, that is violated by the arbitrariness inherent in requiring platforms to adjudge vague laws.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">Platforms would be required to filter users’ posts.</span></summary><br /><span style="font-family: georgia;">
Highly likely, at least for some platforms. All platforms would be under a duty to take proportionate proactive steps to prevent users encountering priority illegal content, and (for services likely to be accessed by children) to prevent children from encountering priority content harmful to children. The Bill gives various examples of such steps, ranging from user support to content moderation, but the biggest clues are in the Code of Practice provisions and the enforcement powers granted to Ofcom.</span><br /><br /><span style="font-family: georgia;">
Ofcom is empowered to recommend in a Code of Practice (if proportionate for a platform of a particular kind or size) proactive technology measures such as algorithms, keyword matching, image matching, image classification or behaviour pattern detection in order to detect publicly communicated content that is either illegal or harmful to children. Its enforcement powers similarly include use of proactive technology. Ofcom would have additional powers to require accredited proactive technology to be used in relation to terrorism content and CSEA (including, for CSEA, in relation to private messages).</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill regulates platforms, not users.</span></summary><br /><span style="font-family: georgia;">
False dichotomy. The Bill certainly regulates platforms, but does so by pressing them into service as proxies to control content posted by users. The Bill thus regulates users at one remove. It also contains new criminal offences that would be committed directly by users.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill outlaws hurting people's feelings.</span></summary><br /><span style="font-family: georgia;">
No, but the new <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html">harmful communications offence</a> comes close. It would criminalise sending, with no reasonable excuse, a message carrying a real and substantial risk that it would cause psychological harm - amounting to at least serious distress - to a likely member of the audience, with the intention of causing such harm. There is no requirement that the response of a hypothetical seriously distressed audience member should be reasonable. One foreseeable hypersensitive outlier is enough. Nor is there any requirement to show that anyone was actually seriously distressed.<br /><br />
The Law Commission, which recommended this offence, considered that it would be kept within bounds by the need to prove intent to cause harm and the need to prove lack of reasonable excuse, both to the criminal standard. However, the standard to which platforms will operate in assessing illegality is <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">reasonable grounds to infer</a>. <span style="color: red;">[Update: the government </span><a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws">announced</a><span style="color: red;"> on 28 November 2022 that the proposed new harmful communications offence will be dropped.]</span><br /><br /></span>
<span style="font-family: georgia;">The Bill also refers to psychological harm in other contexts, but without defining it further. The government intends that psychological harm should not be limited to a medically recognised condition.</span></details><br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill recriminalises blasphemy.</span></summary><br /><span style="font-family: georgia;">
Quite possibly. Blasphemy was abolished as a criminal offence in England and Wales in 2008 and in Scotland in 2021. The possible impact of the harmful communications offence (see previous item) has to be assessed against the background that people undoubtedly exist who experience serious distress (or at least claim to do so) upon encountering content that they regard as insulting to their religion.</span> <span style="color: red; font-family: georgia;">[Update: the government </span><a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws" style="font-family: georgia;">announced</a><span style="color: red; font-family: georgia;"> on 28 November 2022 that the proposed new harmful communications offence will be dropped.]</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill is all about Big Tech and large social media companies.</span></summary><br /><span style="font-family: georgia;">No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill is all about algorithms that push and amplify user content.</span></summary><br /><span style="font-family: georgia;">No. The Bill makes occasional mention of algorithms, but the core duties would apply regardless of whether a platform makes use of algorithmic curation. A plain vanilla discussion forum is within scope.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Secretary of State can instruct Ofcom to modify its Codes of Practice.</span></summary><br /><span style="font-family: georgia;">
True. Section 40 of the Bill empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the Secretary of State believes that modifications are required (a) for reasons of public policy, or (b) in the case of a terrorism or CSEA code of practice, for reasons of national security or public safety. The Secretary of State can keep sending the modified draft back for further modification.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">A platform will be required to remove content that is legal but harmful to adults.</span></summary><br /><span style="font-family: georgia;">No. The legal but harmful to adults duty (should it survive in the Bill) applies only to Category 1 platforms and on its face only requires transparency. Some have argued that its effect will nevertheless be heavily to incentivise Category 1 platforms to remove such content. </span><span style="color: red; font-family: georgia;">[Update: the government </span><a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws" style="font-family: georgia;">announced</a><span style="color: red; font-family: georgia;"> on 28 November 2022 that the legal but harmful to adults duty will be dropped.]</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill is about systems and processes, not content moderation.</span></summary><br /><span style="font-family: georgia;">
False dichotomy. Whilst the Bill's </span><span style="font-family: georgia;">illegality and harm to children </span><span style="font-family: georgia;">duties are couched in terms of <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html">systems and processes</a>, it also lists measures that a service provider is required to take or use to fulfil those duties, if it is proportionate to do so. Content moderation, including taking down content, is in the list. It is no coincidence that the government’s <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1061265/Online_Safety_Bill_impact_assessment.pdf" target="_blank">Impact Assessment</a> estimates additional moderation costs over a 10 year period at nearly £2 billion.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">Ofcom could ban social media quoting features.</span></summary><br /><span style="font-family: georgia;">Indeterminate. Some may take the view that enabling social media quoting encourages toxic behaviour (the reason why the founder of Mastodon <a href="https://mastodon.social/@Gargron/99662106175542726" target="_blank">did not include a quote feature</a>). A proponent of requiring more friction might argue that it is the kind of non-content oriented feature that should fall within the ‘safety by design’ aspects of a duty of care - an approach that some regard as preferable to moderating specific content.<br /><br />Ofcom deprecation of a design feature would have to be tied to some aspect of a safety duty under the Bill and perhaps to risk of physical or psychological harm. There would likely have to be evidence (not just an opinion) that the design feature in question contributes to a relevant kind of risk within the scope of the Bill. From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial. <br /><br />Of course the Bill does not tell us which design features should or should not be permitted. That is in the territory of the significant discretion (and consequent power) that the Bill places in the hands of Ofcom. If it were considered to be within scope of the Bill and proportionate to deprecate a particular design feature, in principle Ofcom could make a recommendation in a Code of Practice. That would leave it to the platform either to comply or to explain how it satisfied the relevant duty in some other way. Ultimately Ofcom could seek to invoke its enforcement powers.</span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The Bill will outlaw end-to-end encryption.</span></summary><br /><span style="font-family: georgia;">Not as such, but... Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to <a href="https://techcrunch.com/2022/11/24/uk-online-safety-bill-css-e2ee/">use best endeavours to develop or source their own scanning technology</a>.<br /><br /> The government’s <a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/163403/government-responds-to-joint-committees-recommended-improvements-to-online-safety-bill/" target="_blank">response</a> to the Pre-legislative Scrutiny Committee is also illuminating: “End-to-end encryption should not be rolled out without appropriate safety mitigations, for example, the ability to continue to detect known CSEA imagery.” </span></details>
<br />
<details><summary><span style="font-family: georgia; font-size: medium;">The press are exempt.</span></summary><br /><span style="font-family: georgia;">
True up to a point, but <a href="https://www.cyberleagle.com/2021/06/carved-out-or-carved-up-online-safety.html" target="_blank">it’s complicated</a>.<br /><br /> First, user comments under newspaper and broadcast stories are intended to be exempt as ‘limited functionality’ under Schedule 1 (but the permitted functionality is extremely limited, for instance apparently excluding comments on comments).<br /><br /> Second, platforms' safety duties do not apply to recognised news publisher content appearing on their services. However, many news and other publishers will fall outside the exemption. <br /><br /> Third, various press and broadcast organisations are exempted from the new <strike>harmful and</strike> false communication<strike>s</strike> offences created by the Bill. </span><span style="color: red; font-family: georgia;">[Update: the government </span><a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws" style="font-family: georgia;">announced</a><span style="color: red; font-family: georgia;"> on 28 November 2022 that the proposed new harmful communications offence will be dropped.]</span></details>
<br /><div class="separator" style="clear: both; text-align: left;"><span style="color: red;">[<span style="font-family: georgia;">Updated 3 December 2022 to take account of the government <a href="https://www.gov.uk/government/news/new-protections-for-children-and-free-speech-added-to-internet-laws" target="_blank">announcement</a> on 28 November 2022.]</span></span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfEHjpmC5a9zoqzAlZHG3HSNcVCWprNXoEMYyJ4_lNJPeA3-Y_U2jWx1AA-W-OYpIQAr6aDl7cDmWLP7nnZyISrjq2W6oA_xdgg5dvdcb9q8uAdnMICxrnjfTXPyFYTmpE8m2--ohQ29yZK_crVpKfbbKMrQcZ0BlYTzykwzoWJpWZwuZ9Jp5GFZgryw/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfEHjpmC5a9zoqzAlZHG3HSNcVCWprNXoEMYyJ4_lNJPeA3-Y_U2jWx1AA-W-OYpIQAr6aDl7cDmWLP7nnZyISrjq2W6oA_xdgg5dvdcb9q8uAdnMICxrnjfTXPyFYTmpE8m2--ohQ29yZK_crVpKfbbKMrQcZ0BlYTzykwzoWJpWZwuZ9Jp5GFZgryw/s1600/snip2.png" width="135" /></a></div><br />
Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-30403978908850193712022-11-02T17:58:00.000+00:002022-11-02T17:58:05.886+00:00On the Dotted Line<p><span style="font-family: georgia;">The topic of electronic signatures seems cursed to eternal life. </span><span style="font-family: georgia;">In the blue corner we have the established liberal English
law approach to signatures, which eschews formality and emphasises intention to
authenticate. In the red corner we have preoccupation with verifying the identity
of the signatory, with technically engineered digital signatures and with the EU’s
eIDAS hierarchy of qualified, advanced and ordinary electronic signatures.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the English courts the blues have it. Judges have
upheld the validity of electronic signatures as informal as signing a name at
the end of an e-mail or even, in one case, clicking an ‘I accept’ button on an
electronic form. They have been able to do this partly because, with very few exceptions,
the England and Wales legislature has refrained from stipulating use of an eIDAS-compliant
qualified or advanced signature as a condition of validity. The eIDAS hierarchy
does form part of our law, but – rather like the Interpretation Act - in
the guise of a toolkit that is available to be used or not as the legislature
wishes. The toolkit has for the most part remained on the legislative shelf.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The potential consequences of stipulating eIDAS-style
formalities in legislation are graphically illustrated by the Austrian case of the <a href="https://www.cryptomathic.com/news-events/blog/all-trains-cancelled-how-an-e-signature-failure-derailed-a-3bn-swiss-austrian-transport-deal">Wrong
Kind of Signature</a>. A €3bn contract to supply double-decker trains to
Austrian Federal Railways was invalidated because the contract was signed with
a qualified electronic signature supported by a Swiss, rather than an EU,
Trusted Service Provider.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The modern English law aversion to imposition of formalities
was pithily encapsulated in an official committee report of 1937, describing the
Statute of Frauds:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“'The Act', in the words of Lord
Campbell . . . 'promotes more frauds than it prevents'. True it shuts out
perjury; but it also and more frequently shuts out the truth. It strikes
impartially at the perjurer and at the honest man who has omitted a precaution,
sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to
assert that 'in the vast majority of cases its operation is simply to enable a
man to break a promise with impunity, because he did not write it down with
sufficient formality.’ ”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">For its part eIDAS continues to complicate and confound. February’s
<a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051451/electronic-execution-documents-industry-working-group-interim-report.pdf">Interim
Report</a> of the Industry Working Group on the Electronic Execution of
Documents, running to 94 pages of discussion, stated that ‘only’ qualified
electronic signatures have equivalent legal status to handwritten signatures
(meaning, according to the Report, that they carry a presumption of
authenticity). Yet while eIDAS does require equivalent legal effect (<a href="https://www.cyberleagle.com/2020/05/decrypting-eidas.html">whatever that
may mean</a>) to be accorded to qualified signatures, it does not require other kinds of electronic signature to be denied that status; nor has English domestic
law done so.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Back in the courts, a recent decision of Senior Costs Judge Gordon-Saker
in <i><a href="https://www.bailii.org/ew/cases/EWHC/Costs/2022/2574.html" target="_blank">Elias v Wallace LLP</a></i> [2022] EWHC 2574 (SCCO) continues down the
road of upholding the validity of informal electronic signatures. Under the
Solicitors Act 1974 (as amended) a solicitor’s bill cannot be enforced by legal
proceedings unless it complies with certain formalities, including that it has
to be:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“(a) signed by the solicitor or
on his behalf by an employee of the solicitor authorised by him to sign, or<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">(b) enclosed in, or accompanied
by, a letter which is signed as mentioned in paragraph (a) and refers to the
bill.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Act states that the signature may be an electronic
signature. It takes its definition of electronic signature from s.7(2) of the
Electronic Communications Act 2000<span class="MsoFootnoteReference"><span style="mso-special-character: footnote;"><!--[if !supportFootnotes]--><span class="MsoFootnoteReference"><span style="line-height: 107%;">[1]</span></span><!--[endif]--></span></span>,
as amended: <span style="mso-spacerun: yes;"> </span><o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">“… so much of anything in
electronic form as –<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpFirst" style="margin-left: 90.0pt; mso-add-space: auto; mso-list: l0 level1 lfo2; text-indent: -18.0pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">(a)<span style="font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->is incorporated into or otherwise logically
associated with any electronic communication or electronic data; and<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpLast" style="margin-left: 90.0pt; mso-add-space: auto; mso-list: l0 level1 lfo2; text-indent: -18.0pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">(b)<span style="font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->purports to be used by the individual creating
it to sign.” <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This is an unusual example of English legislation
stipulating compliance with a defined kind of signature (albeit that S.7(2) is
framed in very broad terms) as a condition of validity. Most legislation requiring
a signature goes no further than a generally stated requirement that the
document must be signed<span class="MsoFootnoteReference"><span style="mso-special-character: footnote;"><!--[if !supportFootnotes]--><span class="MsoFootnoteReference"><span style="line-height: 107%;">[2]</span></span><!--[endif]--></span></span>.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The bills in question were sent to the solicitor’s client as
e-mail attachments. The bills themselves were not signed, but the covering
e-mails concluded with the words: <o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">“Best regards,<o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">Alex<o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">[first name and surname]<o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">Partner<o:p></o:p></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="font-family: georgia;">[telephone numbers, firm name and
physical and website addresses]”.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The judge held:</span></p><p class="MsoNormal"></p><ol style="text-align: left;"><li><span style="font-family: georgia;"><span style="text-indent: -18pt;">The printed name of the firm incorporated in the
invoice, like a letterheading, was not a signature. This unsurprising conclusion
is reminiscent of </span><i style="text-indent: -18pt;">Mehta v J Pereira Fernandes SA</i><span style="text-indent: -18pt;"> [2006] EWHC 813 in
which the same was held for an e-mail address appearing at the top of an e-mail.</span></span></li><li><span style="text-indent: -18pt;"><span style="font-family: georgia;">If the name ‘Alex’ was not generated
automatically, clearly it purported to be used as a signature.</span></span></li><li><span style="font-family: georgia;"><span style="text-indent: -18pt;">If the name ‘Alex’ was auto-generated, then on
the authority of </span><i style="text-indent: -18pt;"><a href="https://www.cyberleagle.com/2019/10/whose-e-signature-is-it-anyway.html">Neocleous
v Rees</a></i><span style="text-indent: -18pt;"> that would constitute a signature. The e-mail footer was clearly
applied with authenticating intent, even if it was the product of a rule.</span></span></li></ol><p></p>
<span style="font-family: georgia;"><span style="line-height: 107%;">The judge also held that
‘letter’ should be interpreted to include e-mail. That is a salutary reminder
that the ability to conduct a transaction electronically may not be only a
question of whether electronic signatures are permissible. Other requirements
of form and process can also come into play.</span>
</span><div style="mso-element: footnote-list;"><div id="ftn1" style="mso-element: footnote;"><p class="MsoFootnoteText"><span style="font-family: georgia; font-size: x-small;"><span class="MsoFootnoteReference"><span style="mso-special-character: footnote;"><span class="MsoFootnoteReference"><span style="line-height: 107%;">[1]</span></span><!--[endif]--></span></span>
Note that the role of S.7 was to make explicit (almost certainly unnecessarily)
that electronic signatures as defined by the section were admissible as
evidence, whereas the Solicitors Act provision concerns substantive validity.<o:p></o:p></span></p>
</div>
<div id="ftn2" style="mso-element: footnote;">
<p class="MsoFootnoteText"><span style="font-family: georgia; font-size: x-small;"><span class="MsoFootnoteReference"><span style="mso-special-character: footnote;"><!--[if !supportFootnotes]--><span class="MsoFootnoteReference"><span style="line-height: 107%;">[2]</span></span><!--[endif]--></span></span>
As to which, see the England and Wales Law Commission’s Statement of the Law in
its <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2019/09/Electronic-Execution-Report.pdf">Report
on Electronic Execution of Documents</a> (2019).</span><o:p></o:p></p>
</div>
</div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com1tag:blogger.com,1999:blog-229721367671779922.post-66145579526413645872022-08-18T18:47:00.003+01:002022-10-09T12:35:25.172+01:00Reimagining the Online Safety Bill<p><span style="font-family: georgia;">“The brutal truth is that nothing is likely to trip up the <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">Online Safety Bill</a>.” So began a blogpost on which I was working just over a month ago. Fortunately, it was still unfinished when Boris Johnson imploded for the final time, the Conservative leadership election was triggered, and candidates – led by Kemi Badenoch - started to voice doubts about the freedom of speech implications of the Bill. Then the Bill’s Commons Report stage was put on hold until the autumn, to allow the new Prime Minister to consider how to proceed.</span></p><p><span style="font-family: georgia;">The resulting temporary vacuum has sucked in commentary from all sides, whether redoubled criticisms of the Bill, renewed pursuit of existing agendas, or alarm at the prospect of further delays to the legislation.</span></p><p><span style="font-family: georgia;">Delay, it should be acknowledged, was always hardwired into the Bill. The Bill’s regulatory regime, even at </span><a href="https://bills.parliament.uk/bills/3137" style="font-family: georgia;" target="_blank">a weighty 218 pages</a><span style="font-family: georgia;">, </span><span style="font-family: georgia;">is a bare skeleton. 
It will have to be fleshed out by a sequence of secondary legislation</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">1</span></span><span style="font-family: georgia;">, Ofcom codes of practice</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">2</span></span><span style="font-family: georgia;">, Ofcom guidance</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">3</span></span><span style="font-family: georgia;">, and designated categories of service providers - each with its own step by step procedures. That kind of long drawn-out process was inevitable once the decision was taken to set up a broadcast-style regime under the auspices of a discretionary regulator such as Ofcom. </span></p><p class="MsoNormal"><span style="font-family: georgia;">In July 2022 Ofcom published an <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0016/240442/online-safety-roadmap.pdf" target="_blank">implementation road-map</a> </span><span style="font-family: georgia;">that would result in the earliest aspect of the regulatory regime (illegality safety duties) going live in mid-2024. We have to wonder whether that would have proved to be optimistic even without the current leadership hiccup and – presumably - a period of reflection before the Bill can proceed further.</span></p><p class="MsoNormal"><span style="font-family: georgia;">The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an <a href="https://www.cyberleagle.com/2019/04/users-behaving-badly-online-harms-white.html" target="_blank">empty foundation</a>: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms. 
</span></p><p class="MsoNormal"><span style="font-family: georgia;">As such, it would not be surprising if, as the Bill proceeded, implementation were to recede ever more tantalisingly out of reach. As the absence of foundations becomes increasingly exposed, the Bill may be in danger not just of delay but of collapsing into the hollow pit beneath, leaving behind a smoking heap of internal contradictions and unsustainable offline analogies.</span></p><p class="MsoNormal"><span style="font-family: georgia;">If, under a new Prime Minister, the government were to reimagine the Online Safety Bill, how might they do it? Especially, how might they achieve a quick win: a regime that could be put into effect immediately, rather than the best part of two years later - if ever? </span></p><p class="MsoNormal"><span style="font-family: georgia;">The most vulnerable part of the Bill is probably the ‘legal but harmful to adults’ provisions</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">4</span></span><span style="font-family: georgia;">. However, controversial as they undoubtedly are, those are far from the most problematic features of the Bill.</span></p><p class="MsoNormal"><span style="font-family: georgia;">Here are some other aspects that might be under the spotlight.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>The new communications offences</b></span></p><p class="MsoNormal"><span style="font-family: georgia;">The least controversial part of the Bill ought to be the new Part 10 criminal offences. Those could, presumably, come into force shortly after Royal Assent. However, some of them badly need fixing.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The new communications offences</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">5</span></span><span style="font-family: georgia;"> have been designed to
replace the Malicious Communications Act 1988 and the </span><a href="https://www.cyberleagle.com/2015/02/from-telegram-to-tweet-section-127-and.html" style="font-family: georgia;" target="_blank">notorious S.127 Communications Act 2003</a><span style="font-family: georgia;">. They have the authority of the Law Commission behind
them.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Unfortunately, <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">the new offences are a mess</a>. The harmful communications offence</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">6</span></span><span style="font-family: georgia;">, in particular, will plausibly create
a veto for those most readily distressed by encountering views that they regard
as deeply repugnant, even if that reaction is unreasonable. That prospect, and
the consequent risk of legal online speech being chilled or removed, is exacerbated
when the offence is </span><a href="https://twitter.com/cyberleagle/status/1504096890400432132" style="font-family: georgia;" target="_blank">combined with the illegality duty</a><span style="font-family: georgia;"> that the Bill, in its
present form, would impose on all U2U platforms and search engines.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Part 10 of the Bill also has the air of unfinished business,
with calls for further new offences, such as an offence of deliberately sending
flashing images to people with epilepsy.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Make it about safety?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The 2017 Green Paper that started all this was entitled <a href="https://www.gov.uk/government/consultations/internet-safety-strategy-green-paper" target="_blank">Internet Safety Strategy</a>. Come the <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" target="_blank">April 2019 White Paper</a>, that had metamorphosed
into Online Harms. </span><span style="font-family: georgia;">Some have criticised the Bill’s reversion to Online Safety, although in truth the change is more label than substance. It does, however, prompt the question whether a desire for some quick wins would be served by focusing the Bill, in substance as well as in name, on safety in its core sense.</span></p><p class="MsoNormal"><span style="font-family: georgia;">That is where much of the original impetus for the Bill stemmed from. Suicide, grooming, child abuse, physically dangerous ‘challenges’, violence – these are the stuff of safety-related duties of care. <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">It is well within existing duty of care parameters</a> to consider whether a platform has done something that creates or exacerbates a risk of physical injury as between users; then, whether a duty of care should be imposed; and if so, a duty to take what kind of steps (preventative or reactive) and in what circumstances. Some kinds of preventative duty, however, involve the imposition of <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">general monitoring obligations</a>, which are controversial. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">A Bill focused on safety in its core sense – risk of physical injury - might usefully clarify and codify, in the body of the Bill, the contents of such a duty of care and the circumstances in which it would arise. A distinction might, for example, be drawn between positively promoting an item of user content, compared with simply providing a forum akin to a traditional message board or threading a conversation. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Duties of care are feasible for risk of physical injury, because physical injury is objectively identifiable. Physical injuries may differ in degree, but a bruise and a broken wrist are the same kind of thing. We also have an understanding of what gives rise to risk of physical injury, be it an unguarded lathe or a loose floorboard. </span></p><p class="MsoNormal"><span style="font-family: georgia;">The same is not true for amorphous conceptions of harm that depend on the subjective perception of the person who encounters the speech in question.</span><span style="font-family: georgia;"> </span><a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" style="font-family: georgia;" target="_blank">Speech is not a tripping hazard</a><span style="font-family: georgia;">.</span><span style="font-family: georgia; mso-spacerun: yes;"> </span><span style="font-family: georgia;">Broader harm-based duties of care do not work in the same way, if at all, for controversial opinions, hate, blasphemy, bad language, insults, and all the myriad kinds of speech that to a greater or lesser extent excite condemnation, inflame emotions, or provoke anger, distress and assertions of risk of suffering psychological harm. 
</span></p><p class="MsoNormal"><span style="font-family: georgia;">A subjective harm-based duty of care requires the platform to assess and weigh those considerations against the freedom of speech not only of the poster, but of all other users who may react differently to the same speech, then decide which should prevail. That is a fundamentally different exercise from the assessment of risk of physical injury that underpins a safety-related duty of care. An approach that assumes that risk of subjectively perceived speech harms can be approached in the same way as risk of objectively identifiable physical injury will inevitably end up floundering in the kind of morass in which the Bill now finds itself.</span></p><p class="MsoNormal"><span style="font-family: georgia;">The difference from risk of physical injury was, perhaps unwittingly, illustrated in the context of the illegality duty by the then Digital Minister Chris Philp in the Bill’s Commons Committee stage. He was discussing the task that platforms would perform in deciding whether user content was illegal under the new ‘harmful communications’ offence (above). The platform would, he said, perform a balancing exercise in assessing whether the content was a contribution to a matter of public interest. No balancing exercise is necessary to determine whether a broken wrist is or is not a physical injury.</span></p><p class="MsoNormal"><span style="font-family: georgia;">Again within the illegality duty, the new foreign interference offence under Clause 13 of the National Security Bill would be designated as a priority offence under the Online Safety Bill. That would require platforms to adjudge, among other things, risk of “spiritual injury”. 
</span></p><p class="MsoNormal"><span style="font-family: georgia;">The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails <a href="https://www.cyberleagle.com/2019/03/a-ten-point-rule-of-law-test-for-social.html" target="_blank">the rule of law test</a>: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Rethink the illegality duty?</span></b><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">7</span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Requiring platforms to remove illegal user content sounds
simple, but isn’t. During the now paused Commons Report Stage debate on the
Bill, Sir Jeremy Wright QC (who, ironically, was the Secretary of State for
Culture at the time when the White Paper was launched in April 2019), observed:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“When people first look at this
Bill, they will assume that everyone knows what illegal content is and
therefore it should be easy to identify and take it down, or take the
appropriate action to avoid its promotion. … <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">… criminal offences very often
are not committed just by the fact of a piece of content; they may also require
an intent, or a particular mental state, and they may require that the
individual accused of that offence does not have a proper defence to it.<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">The question of course is how on
earth a platform is supposed to know either of those two things in each case.” <span style="mso-spacerun: yes;"> </span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">He might also have added that the relevant factual material
for any given offence will often include information that is outside anything
of which the platform can have knowledge, especially for real-time automated
filtering systems</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">8</span></span><span style="font-family: georgia;">.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In any event, it is pertinent to ask <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-concretised.html" target="_blank">how many offences exist for which illegality can be determined with confidence simply by looking at the content itself</a> and nothing else? Illegality often requires assessment of intention (sometimes, but not always, intention can be inferred from the content), purpose, or of extrinsic factual information. The Bill now contains an <a href="https://www.cyberleagle.com/2022/07/platforms-adjudging-illegality-online.html" target="_blank">illuminating, but ultimately unsatisfactory</a>,
attempt (New Clause 14) to address these issues.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that <a href="https://bills.parliament.uk/publications/46665/documents/1879">human rights compliance requires a high threshold to be set for content to be assessed as illegal</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Given the increasingly (if belatedly) apparent problems with the illegality duty, what options might a government coming to it with a fresh eye consider? The current solution, as with so many problematic aspects of the Bill, is to hand it off to Ofcom. New Clause 15 would require Ofcom to produce guidance on how platforms should go about adjudging illegality in accordance with NC 14.</span></p><p class="MsoNormal"><span style="font-family: georgia;">Assuming that the illegality duty were not dropped altogether, other possibilities might include:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;">Restrict any duty to offences where the existence of
an offence (including any potentially available defences) is realistically
capable of being adjudged on the face of the content itself with no further information<sup><span style="font-family: Georgia, serif; line-height: 115%;"><span style="font-size: xx-small;">9</span></span></sup>.</span></li><li><span style="font-family: georgia;">For in-scope offences, raise the illegality
determination threshold from reasonable grounds to infer to manifest illegality<sup><span style="font-family: Georgia, serif; line-height: 115%;"><span style="font-size: xx-small;">10</span></span></sup>.</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">Steps of this kind might in any event be necessary to achieve ECHR compliance. They would also reflect broader traditions of protection of freedom of speech, such as the <a href="https://bills.parliament.uk/publications/46665/documents/1879">presumption against prior restraint</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Illegality across the UK<o:p></o:p></span></b><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">11</span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Another consequence of illegality being a legal construct is
that criminal offences vary across the UK. </span><span style="font-family: georgia;">The Bill requires a platform, when preventing or removing user content under its illegality duty, to treat a criminal offence in one part of the UK as if it applied to the whole of the UK. This has the <a href="https://bills.parliament.uk/publications/46671/documents/1882" target="_blank">bizarre consequence</a> that platforms will, for instance, have to apply in parallel both the new harmful communications offence contained in the Bill and its repealed predecessor, S.127 of the Communications Act 2003. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Why is that? Because S.127 will be repealed only for England and Wales and will remain in force in Scotland and Northern Ireland. Platforms would have to treat S.127 as if it still applied in England and Wales; and, conversely, the new England and Wales harmful communications offence as if it applied in Scotland and Northern Ireland. </span></p><p class="MsoNormal"><span style="font-family: georgia;">In Committee on 14 June 2022 the then Minister confirmed that: </span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“…the effect of the clauses is a
levelling up—if I may put it that way. Any of the offences listed effectively
get applied to the UK internet, so if there is a stronger offence in any one
part of the United Kingdom, that will become applicable more generally via the
Bill.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Of course the alternative of requiring platforms to undertake the (in reality impossible) task of deciding which UK law – England and Wales, Northern Ireland or Scotland - applied to which post or tweet, would hardly be less problematic.</span></p><p class="MsoNormal"><span style="font-family: georgia;">At Report stage the government added an amendment to the Bill</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">12</span></span><span style="font-family: georgia;"> which would, for the future, mean that the non-priority illegality duty would apply only to offences enacted by the UK Parliament, or by devolved administrations with the consent of the Westminster government. If nothing else, that shines a brighter spotlight on the problem. </span><span style="font-family: georgia; mso-spacerun: yes;"> </span><span style="font-family: georgia; mso-spacerun: yes;"> </span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The role of Ofcom<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The most radical option, were the government looking for a legislative quick win that cuts out delay, would be to jettison Ofcom and its companion two year procession of secondary legislation, guidance and codes of practice. </span></p><p class="MsoNormal"><span style="font-family: georgia;">In truth, as a matter of principle it was</span><span style="font-family: georgia;"> </span><a href="https://www.cyberleagle.com/2018/10/a-lord-chamberlain-for-internet-thanks.html" style="font-family: georgia;" target="_blank">always a bad idea to apply discretionary broadcast-style regulation to individual speech</a><span style="font-family: georgia;">. The way
to govern individual speech is with clear, certain laws of general application.
A government so minded might consider aligning practicality with principle.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If Ofcom’s role were to be scrapped, what could replace it? One alternative might be to take existing common law duties of care relating to risk of physical injury as the starting point, clarify and codify them on the face of the Bill (not by secondary legislation), and provide for enforcement otherwise than through a regulator. </span></p><p class="MsoNormal"><span style="font-family: georgia;">That would require the scope and content of any duties under the Bill to be articulated, Goldilocks-style, to a reasonable level of clarity: not so abstract as to be vague, not so technology and business model-specific as to be unworkable, but just right. Granted, that is easier said than done; but still perhaps more achievable than attempting to launch the overladen supertanker that is now marooned in its passage through Parliament. </span></p><p class="MsoNormal"><span style="font-family: georgia;">This approach would, however, raise questions about how some kinds of duty could be made compatible with the <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">hosting protections</a> originating in the EU ECommerce Directive, to which the government remains committed (unlike the <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html">prohibition on general monitoring obligations</a>, from which the government has distanced itself). </span></p><p class="MsoNormal"><span style="font-family: georgia;">There would then be the question of Ofcom’s proposed powers to issue notices requiring providers to use approved scanning and filtering technology</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">13</span></span><span style="font-family: georgia;">. Those powers are at best controversial, raising a plethora of issues of their own. 
If such powers were to be continued in some form, they could be a candidate for separate legislation, so that issues such as impact on privacy and end-to-end encryption could be brought out into the open and given the full debate that they deserve. </span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Other parts of the Bill <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is much more to the Bill than the parts discussed so far: fraudulent advertising</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">14</span></span><span style="font-family: georgia;"> and age-verification of pornographic websites</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">15</span></span><span style="font-family: georgia;"> have Parts of the Bill to themselves. There are numerous children-related provisions</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">16</span></span><span style="font-family: georgia;">, which to an extent overlap with the ICO Code of Practice on age-appropriate design and the currently mothballed Digital Economy Act 2017. Other aspects of the Bill include </span><a href="https://www.cyberleagle.com/2021/06/carved-out-or-carved-up-online-safety.html" style="font-family: georgia;" target="_blank">press exemptions promised at the time of the White Paper</a><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">17</span></span><span style="font-family: georgia;">(which always looked likely to be undeliverable and are still the subject of heavy debate); and the provisions constraining how Category 1 platforms can treat journalism and content of democratic importance</span><span style="font-family: Georgia, serif; vertical-align: super;"><span style="font-size: xx-small;">18</span></span><span style="font-family: georgia;">. </span></p><p class="MsoNormal"><span style="font-family: georgia;">These are all crafted on the basis of a regulatory regime operated by Ofcom. It would not be a simple matter to disentangle them from Ofcom, should the government contemplate a non-Ofcom fast track.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Online ASBIs<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">A refocused Bill could face the objection that it
does not address some of the most unpleasant, yet currently legal, user
behaviour that can be found online. That does not rule out the possibility of legislation
that on its face (not consigned to secondary legislation) draws clear lines
that may differ from those that apply today. But if Parliament is unwilling or unable to draw
clear lines to govern behaviour regarded as beyond the pale, what other possibilities exist?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">One answer, albeit itself somewhat controversial, is already
sitting on the legislative shelf, but (as far as can be seen) appears in the
online context to be gathering dust.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Anti-Social Behaviour, Crime and Policing Act 2014
contains a procedure for some authorities to obtain a civil anti-social behaviour
injunction (ASBI, the successor to ASBOs) against someone who has engaged or
threatens to engage in anti-social behaviour, meaning “conduct that has caused,
or is likely to cause, harassment, alarm or distress to any person”. That
succinctly describes online disturbers of the peace, albeit in very broad terms.
It maps readily on to the most egregious abuse, cyberbullying, harassment and
the like.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Home Office Statutory Guidance on the use of the 2014
Act powers (revised in December 2017, August 2019 and June 2022) makes no
mention of their use in relation to online behaviour. Yet nothing in the
legislation restricts an ASBI to offline activities. Indeed, over 10 years ago
The Daily Telegraph <a href="https://www.telegraph.co.uk/technology/3355432/Teenager-handed-internet-Asbo.html" target="_blank">reported</a> an 'internet ASBO' made under predecessor legislation
against a 17-year-old who had been posting material on the social media
platform Bebo. The order banned him from publishing material that was
threatening or abusive and promoted criminal activity.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">ASBIs raise difficult questions of how they should be framed
and of proportionality. Some may have concerns about the broad terms in which
anti-social behaviour is defined in the legislation. Nevertheless, the courts
to which applications are made should, at least in principle, have the societal
and institutional legitimacy, as well as the experience and capability, to
weigh such factors. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That said, the July 2020 Civil Justice Council Report
“<a href="https://www.judiciary.uk/wp-content/uploads/2020/10/ASBI-final-accessible.pdf" target="_blank">Anti-Social Behaviour and the Civil Courts</a>” paints a somewhat dispiriting picture of the use of ASBIs offline. It highlights a practice of applying for orders
<i>ex parte</i> – something that would be especially troubling for an ASBI that
would affect the defendant’s freedom of expression. Concerns of that kind would
have to be carefully addressed if online ASBIs were to be picked up and dusted
off. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On the positive side, the usefulness of online ASBIs could
be transformed if the government were to explore the possibility of extending </span><span style="font-family: georgia;">beyond the official authorities </span><span style="font-family: georgia;">the ability to apply to court for an online ASBI, for instance to selected voluntary organisations.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Finally, for a longer-term view of access to justice
online, Section 9 of my <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">submission</a> to the Online Harms White Paper
consultation has some blue-sky thoughts.</span></p><p class="MsoNormal"><b><span style="line-height: 115%;"><span style="font-family: georgia;">Footnotes</span></span></b><span style="line-height: 115%;"><span style="font-family: georgia;"> (references are to the Bill sections
as at the Commons Report stage, and to amendments and new clauses (NC) adopted
during <a href="https://hansard.parliament.uk/commons/2022-07-12/debates/942C54C4-D672-492E-BAD9-195E3BB63724/OnlineSafetyBill" target="_blank">Report Stage on 12 July 2022</a> but not yet published as an amended Bill.)</span><span style="font-size: 9pt;"><o:p></o:p></span></span></p><p class="MsoNormal"><sup><span style="font-family: georgia;">1 <u>Required secondary legislation</u> S.53: priority
content harmful to children, primary priority content harmful to children; S.54:
priority content harmful to adults; NB S.55: regulation-making; S.56: Ofcom
review; S.60: reports to National Crime Agency; S.81/Sch 11: service
provider categorisation; S.98: overseas regulators; SS.141 and 142: super-complaints.
<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">2 <u>Required Ofcom codes of practice</u> S.37/Sch 4: Terrorism content, CSEA content, other
duties (illegal content, children’s online safety, adults’ online safety, user
empowerment, content of democratic importance, journalistic content, content
reporting, complaints procedures); fraudulent advertising (Cat 1 and 2A
providers). Code of practice measures
must be compatible with pursuit of specified online safety objectives (Sch 4
para 3, or as amended by regulations). Draft codes of practice are subject to
modification under Secretary of State’s power of direction (S.40(1)).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">3 <u>Required Ofcom guidance</u> S.48 (as amended at Report stage): service
provider record-keeping, review and children’s risk assessments, service
provider protection of news publisher content; S.58 (Cat 1 services): offer to users
of identity verification; S.65 (Cat 1, 2A and 2B services): transparency
reports; S.69 (regulated pornographic content providers): compliance with
duties; S.85: illegal content risk assessments, children’s risk assessments, Cat
1 service adults’ risk assessments; S.130: enforcement action; S.143: super-complaints;
NC15: service provider illegality judgements. Also note S.84:<u> Required</u> <u>Ofcom
risk assessment, risks register and risk profiles</u> for illegal content,
content harmful to children, content harmful to adults. <o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">4 <u>Legal but harmful to adults (Cat 1 services)</u> S.12:
adults risk assessment; S.13: transparency and other duties; S.14: user
empowerment; S.17(5): user content reporting; S.18(6): complaints procedures;
S.54: meanings of content harmful to adults and priority content harmful to
adults; S.55: regulations designating priority content harmful to adults; S.56:
Ofcom review; S.64/Sch 8 (Cat 1, 2A and 2B services): transparency reports;
S.81/Sch 11 (service provider categorisation); S.84: Ofcom risk assessment,
risks register and risk profiles; S.187: definition of ‘harm’ as physical or
psychological harm.<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">5 <u>New communications offences</u> S.151 (harmful
communications); S.152 (false communications); S.153 (threatening communications);
S.154 (interpretation); S.155 (extraterritorial reach); S.156 Liability of
corporate officers. S.151 and 152 make use of problematic ‘likely audience’
tests. S.153 ought to be uncontroversial but has adopted wider language than
the Law Commission’s recommendation, resulting in possible overreach (discussed
<a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html">here</a>). <o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">6 <u>Harmful communications offence</u> S.151.<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">7 <u>Illegality duty</u> S.9 (U2U services), S.24
(search services).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">8 <u>Real-time automated filtering systems</u> S.9(3)(a)
and (b); <i>cf</i> also S.9(2); S.24(3)(a); <i>cf</i> also S.24(2); S.104:
accredited technology (terrorism content, CSEA content); S.117: proactive
technology requirement (illegal content, children’s online safety, fraudulent
advertising).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">9 <u>Capability of adjudging illegality on the face of
content alone</u> This would involve review of at least the priority offences
designated in Sch 7. <o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">10 <u>Manifest illegality</u> This would involve
reconsideration of NC 14. There is uncertainty in the Bill about whether, and
if so how far, a provider would be expected to go looking for information in
order to determine whether there were reasonable grounds to infer an offence (para
10 of the <a href="https://www.gov.uk/government/publications/fact-sheet-on-changes-to-the-illegal-content-duties-within-the-online-safety-bill/fact-sheet-on-changes-to-the-illegal-content-duties-within-the-online-safety-bill">Government
Fact Sheet</a> suggests that this would be left to Ofcom guidance). This seems most
likely to be relevant to the reactive duties specified in S.9(3)(c) and
S.24(3)(b) rather than to real time automated monitoring and filtering (S.9(3)(a)
and (b); <i>cf</i> also S.9(2); S.24(3)(a)).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">11 <u>Illegality applied across the UK</u> S.52(9) and (12).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">12 <u>Future devolved offences</u> Amendment 94. <o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">13 <u>Scanning and filtering technology powers</u> S.104: use of accredited technology (terrorism
content, CSEA content); S.117: inclusion of proactive technology requirements
in Ofcom confirmation decisions (illegal content, children’s online safety,
fraudulent advertising).<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">14 <u>Fraudulent advertising (Cat 1 and Cat 2A
services)</u> SS. 34 and 35.<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">15 <u>Pornographic site age verification</u> SS.66 to 68.<o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">16 <u>Children-related provisions</u> For service
providers the main duties for content harmful to children are set out in S.31: children’s
access assessments; SS.10 and 25: children’s risk assessments; SS.11 and 26: children’s
safety duties; S.53: primary priority, priority and non-designated content
harmful to children; S.187: definition of ‘harm’ as physical or psychological
harm. <o:p></o:p></span></sup></p><p class="MsoNormal"><sup><span style="font-family: georgia;">17 <u>News publisher content exemptions</u> S.49(2)(g) and S.51(2)(b): exclusion from
scope of safety duties; NC19 (Cat 1 services): duties to protect news publisher
content. S.49 (2)(e) comments and reviews on provider content. Recognised news
publishers are also among those exempted from two of the three new
communications offences: S.151(6)(a) (harmful communications); S.152(4)(a) (false
communications). <o:p></o:p></span></sup></p><p class="MsoNormal"><span style="font-family: georgia;">
<sup><span style="font-size: 11pt; line-height: 115%;">18 <u>Journalism and
content of democratic importance (Cat 1 services)</u> S.16: journalistic
content; S.15: content of democratic importance.</span></sup></span></p><p class="MsoNormal"><span style="color: red; font-family: georgia;">[Typo (2009) corrected to 2019, 19 Aug 2022. Footnotes added 9 Oct 2022.]</span></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYesicR2o0AHClnJM4ZeM1-8SunvoJMiQ0JmUlmj7oDUxwpK_7gHNZ9C74FN4Cz_WUHUkY0-8MotMBwg4cA5fDFqco2yzi4HVdRTABywOGW8cokOVb6igck0FZ9PZsg-cUrqqGtkDVVUaD6J61P6KxTaDGQonxivterhazn0_HCgeKXwjBs-UZtv6V1g/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYesicR2o0AHClnJM4ZeM1-8SunvoJMiQ0JmUlmj7oDUxwpK_7gHNZ9C74FN4Cz_WUHUkY0-8MotMBwg4cA5fDFqco2yzi4HVdRTABywOGW8cokOVb6igck0FZ9PZsg-cUrqqGtkDVVUaD6J61P6KxTaDGQonxivterhazn0_HCgeKXwjBs-UZtv6V1g/s1600/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-81053512461065384192022-07-30T22:24:00.002+01:002022-07-30T22:29:46.363+01:00Platforms adjudging illegality – the Online Safety Bill’s inference engine<p><span style="font-family: georgia;">The Online Safety Bill, b</span><span style="font-family: georgia;">efore the </span><span style="font-family: georgia;"><a href="https://www.bbc.co.uk/news/uk-62158287">pause button was pressed</a></span><span style="font-family: georgia;">, enjoyed a single day’s Commons Report stage
</span><a href="https://hansard.parliament.uk/commons/2022-07-12/debates/942C54C4-D672-492E-BAD9-195E3BB63724/OnlineSafetyBill" style="font-family: georgia;" target="_blank">debate</a><span style="font-family: georgia;"> on 12 July 2022. </span><span style="font-family: georgia;">Several government
amendments were passed and incorporated into the Bill.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">One of the most interesting additions is <a href="https://twitter.com/cyberleagle/status/1552925062189256713/photo/1">New Clause 14</a>
(NC14), which stipulates how user-to-user providers and search engines
should decide whether user content constitutes a criminal offence. This was previously
an under-addressed but nevertheless deep-seated problem for the Bill’s illegality duty. </span></p><p class="MsoNormal"><span style="font-family: georgia;">One underlying
issue is that (especially for real-time proactive filtering) providers are placed in the
position of having to make illegality decisions on the basis of a relative paucity
of information, often using automated technology. That tends to lead to
arbitrary decision-making.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Moreover, if the threshold for determining illegality is set
low, large scale over-removal of legal content will be baked into providers’
removal obligations. But if the threshold is set high enough to avoid over-removal,
much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">NC 14 looks like a response to <a href="https://terrorismlegislationreviewer.independent.gov.uk/missing-pieces-terrorism-legislation-and-the-online-safety-bill/" target="_blank">concerns raised in April 2022</a> by
the Independent Reviewer of Terrorism Legislation over how a service provider’s
illegality duty would apply to terrorism offences, for which (typically) the scope
of acts constituting an offence is extremely broad. The most significant limits on the offences are set by intention and available defences – neither of which may be apparent
to the service provider. As the Independent Reviewer put it:</span></p><p class="MsoNormal"></p><blockquote><span style="font-family: georgia; text-indent: 36pt;">“Intention, and the absence of
any defence, lie at the heart of terrorism offending”.</span></blockquote><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">He gave five examples of unexceptional online
behaviour, ranging from uploading a photo of Buckingham Palace to soliciting
funds on the internet, which if intention or lack of a defence were simply
assumed, would be caught by the illegality duty. He noted:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“It cannot be the case that
where content is published etc. which might result in a terrorist offence being
committed, it should be <i>assumed</i> that the mental element is present, and
that no defence is available. </span><span style="font-family: georgia; text-indent: 36pt;">Otherwise, much lawful
content would “amount to” a terrorist offence.”</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If, he suggested, the intention of the Bill was that
inferences about mental element and lack of defence should be drawn, then the Bill
ought to identify a threshold. But if the intention was to set the bar at ‘realistic
to infer’, that:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“does not allow sufficiently for
freedom of speech. It may be “realistic” but wholly inaccurate to infer
terrorist intent in the following words: “I encourage my people to shoot the
invaders””<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Issues of this kind are not confined to terrorism offences.
There will be other offences for which context is significant, or where a
significant component of the task of keeping the offence within proper bounds is performed by
intention and defences. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Somewhat paraphrased, the answers provided to service
providers by NC 14 are:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;">Base your illegality judgement on whatever relevant information
you or your automated system have reasonably available.</span></li><li><span style="font-family: georgia;">If you have reasonable grounds to infer that all
elements of the offence (including intention) are present, that is sufficient unless you have reasonable grounds to infer that a defence may be successful.</span></li><li><span style="font-family: georgia;">If an item of content surmounts the reasonable grounds
threshold, and you do not have reasonable grounds to infer a defence, then you
must treat it as illegal.</span></li></ul><p></p>
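<p class="MsoNormal"><span style="font-family: georgia;">Reduced to its bare logic, the NC 14 decision rule resembles a simple inference procedure. The following Python sketch is purely illustrative (the function and parameter names are mine, not the Bill's, and the boolean inputs stand in for judgements that the Bill leaves to providers and, ultimately, Ofcom guidance):</span></p>

```python
# Illustrative sketch only: the NC 14 decision rule reduced to boolean logic.
# The two predicates are hypothetical stand-ins for 'reasonable grounds to
# infer' judgements made on the information reasonably available to the
# provider or its automated system.

def treat_as_illegal(elements_inferred: bool, defence_inferred: bool) -> bool:
    """Return True where the NC 14 rule would require the provider to treat
    the content as illegal.

    elements_inferred: reasonable grounds to infer that ALL elements of the
        offence (including any mental element) are present.
    defence_inferred: reasonable grounds to infer that a defence may be
        successfully relied upon.
    """
    return elements_inferred and not defence_inferred

# The asymmetry discussed below: a possible defence is ignored unless there
# is some positive basis on which to infer it.
print(treat_as_illegal(True, False))   # elements inferred, no basis for defence
print(treat_as_illegal(True, True))    # basis to infer a defence
print(treat_as_illegal(False, False))  # elements not inferred
```

<p class="MsoNormal"><span style="font-family: georgia;">The point of the sketch is that the defence input defaults, in effect, against the user: absent any basis to infer a defence, content that surmounts the elements threshold must be treated as illegal.</span></p>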
<p class="MsoNormal"><span style="font-family: georgia;">Factors relevant to reasonable availability of content include
the size and capacity of the service provider, and whether a judgement is made
by human moderators, automated systems or processes, or a combination of both. (The
illegality duty applies not just to large social media operators but to all
25,000 providers within the scope of the Bill.) <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Ofcom will be required to provide guidance to service
providers about making illegality judgements.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">What does this mean for users? It is users, let us not
forget, whose freedom of expression rights are at risk of being interfered with
as a result of the illegality removal duty imposed on service providers. The
duty can be characterised as a form of <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">prior restraint</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The first significant point concerns unavailability to the
provider of otherwise potentially relevant contextual information. If,
from information reasonably available to the provider (which at least for
automated systems may be only the content of the posts themselves and,
perhaps, related posts), it appears that there are reasonable grounds to infer
that an offence has been committed, that is enough. At least for automated real-time
systems, the possibility that extrinsic information might put the post in a
different light appears to be excluded from consideration, unless its existence
and content can be inferred from the posts themselves.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Alternatively, for offences that are highly dependent on context (including extrinsic context), would there </span><span style="font-family: georgia;">be a point at which </span><span style="font-family: georgia;">a provider could (or should) conclude that there is too little information available to support a determination of reasonable
grounds to infer?</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Second, the difference between elements of the offence itself and an
available defence may be significant. The possibility of a defence is to be ignored unless
the provider has some basis in the information reasonably available to it on which to infer that a defence may be successful.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Take ‘reasonable excuse’. For the new harmful communications
offence that the Bill would enact, lack of reasonable excuse is an element of
the offence, not a defence. A provider could not conclude that the user’s post
was illegal unless it had reasonable grounds to infer (on the basis of the information
reasonably available to it) that there was no reasonable excuse.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">By contrast, two offences under the Terrorism Act 2000, S.58
(collection of information likely to be of use to a terrorist) and 58A (publishing
information about members of the armed forces etc likely to be of use to a
terrorist) provide for a reasonable excuse defence. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The possibility of such a defence is to be ignored unless the
provider has reasonable grounds (on the basis of the information reasonably
available to it) to infer that a defence may be successful. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The difference is potentially significant when we consider that (for
instance) journalism or academic research constitutes a defence of reasonable excuse under S.58. Unless the material reasonably available to the provider (or its automated system) provides a basis on which to infer that </span><span style="font-family: georgia;">journalism or academic research </span><span style="font-family: georgia;">is the purpose of the act, the possibility of a journalism or academic research defence is to be ignored. (If, hypothetically, the offence had been drafted similarly to the harmful communications offence, so that lack of reasonable excuse was an element of the offence, then in order to adjudge the post as illegal the provider would have had to have reasonable grounds to infer that the purpose was <b>not</b> (for instance) journalism or academic research.)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">NC14 was debated at Report Stage. The Opposition spokesperson,
Alex Davies-Jones, said: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The new clause is deeply
problematic, and is likely to reduce significantly the amount of illegal
content and fraudulent advertising that is correctly identified and acted on.”<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“First, companies will be
expected to determine whether content is illegal or fraudulently based on
information that is “reasonably available to a provider”, with reasonableness
determined in part by the size and capacity of the provider. That entrenches
the problems I have outlined with smaller, high-risk companies being subject to
fewer duties despite the acute risks they pose. Having less onerous
applications of the illegal safety duties will encourage malign actors to migrate
illegal activity on to smaller sites that have less pronounced regulatory
expectations placed on them.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If information ‘reasonably
available to a provider” is insufficiently stringent, what information should a
provider be required to base its decision upon? Should it guess at information that it does
not have, or make assumptions (which would trespass into arbitrariness)?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In truth it is not so much NC14 itself that is deeply problematic,
but the underlying assumption (which NC14 has now exposed) that service
providers are necessarily in a position to determine illegality of user content,
especially where real time automated filtering systems are concerned.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Alex Davies-Jones went on: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“That significantly raises the
threshold at which companies are likely to determine that content is illegal.” <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">We might fairly ask: raises the threshold compared with what?
The draft Bill defined illegal user content as “content where the service
provider has reasonable grounds to believe that use or dissemination of the
content amounts to a relevant criminal offence.” That standard (<a href="https://www.cyberleagle.com/2022/02/harm-version-40-online-safety-bill-in.html" target="_blank">which would inevitably have resulted in over-removal</a>) was dropped from the Bill as
introduced into Parliament, leaving it unclear what standard service providers
were to apply.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The new Online Safety Minister (Damian Collins) said: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The concern is that some social
media companies, and some users of services, may have sought to interpret the
criminal threshold as being based on whether a court of law has found that an
offence has been committed, and only then might they act. Actually, we want
them to pre-empt that, based on a clear understanding of where the legal
threshold is. That is how the regulatory codes work. So it is an attempt, not
to weaken the provision but to bring clarity to the companies and the regulator
over the application.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In any event, if the Opposition were under the impression
that prior to NC14 the threshold in the Bill was lower than ‘reasonable grounds to infer’, what might that standard be? If service providers were obliged to remove user content
on (say) a mere suspicion of possible illegality, does that sufficiently protect legal online speech? Would a standard set that low comply with the UK’s ECHR obligations, to which – whatever this government’s
view of the ECHR may be - the Opposition is committed? Indeed it is sometimes
said that the standard set by the ECHR is <a href="https://bills.parliament.uk/publications/46665/documents/1879" target="_blank">manifest illegality</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It bears emphasising that these issues should have been obvious from the moment an
illegality duty of care was in contemplation: by the time of the April 2019 White Paper, if not before.
Yet only now are they being given serious consideration. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is ironic that in the 12 July Commons debate the most perceptive
comments about how service providers are meant to comply with the illegality
duty were made by Sir Jeremy Wright QC, the former Culture Secretary who launched
the White Paper in April 2019. He said:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“When people first look at this
Bill, they will assume that everyone knows what illegal content is and
therefore it should be easy to identify and take it down, or take the
appropriate action to avoid its promotion. But, as new clause 14 makes clear,
what the platform has to do is not just identify content but have reasonable
grounds to infer that all elements of an offence, including the mental
elements, are present or satisfied, and, indeed, that the platform does not
have reasonable grounds to infer that the defence to the offence may be
successfully relied upon. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">That is right, of course, because
criminal offences very often are not committed just by the fact of a piece of
content; they may also require an intent, or a particular mental state, and
they may require that the individual accused of that offence does not have a
proper defence to it. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">The question of course is how on
earth a platform is supposed to know either of those two things in each case.
This is helpful guidance, but the Government will have to think carefully about
what further guidance they will need to give—or Ofcom will need to give—in
order to help a platform to make those very difficult judgments.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Why did the government not address this fundamental issue at
the start, when a full and proper debate about it could have been had?</span></p><p class="MsoNormal"><span style="font-family: georgia;">This is not the only aspect of the Online Safety Bill that could and should have been fully considered and
discussed at the outset. If the Bill ends up being significantly delayed,
or even taken back to the drawing board, the government has only itself to
blame.</span></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2wAQx9wYIkdRlGzcQijxJHC48tqakn16FmU5bBtI7VC8p2rmUNR5WVnGjNWBKiQEIFEcF_YQMa7SPnOBPlzVT6k2i4yb1ORv72hvlNsPFLLajw7OY88mkEOgdzq3f3ybK1msrVujCxGdErqo1XWYUX5rEAdNQUlhxLkkL_pN-GOf8Xb4UwSFonh-W6w/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2wAQx9wYIkdRlGzcQijxJHC48tqakn16FmU5bBtI7VC8p2rmUNR5WVnGjNWBKiQEIFEcF_YQMa7SPnOBPlzVT6k2i4yb1ORv72hvlNsPFLLajw7OY88mkEOgdzq3f3ybK1msrVujCxGdErqo1XWYUX5rEAdNQUlhxLkkL_pN-GOf8Xb4UwSFonh-W6w/s1600/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-19139839263299841232022-03-27T13:52:00.002+01:002022-07-11T21:25:05.020+01:00Mapping the Online Safety Bill<span style="font-family: georgia;">On 17 March 2022 the UK’s <a href="https://bills.parliament.uk/bills/3137" target="_blank">Online Safety Bill</a>, no longer a draft, was introduced into Parliament and had its formal First Reading.<br /><br />Two years ago Carnegie UK <a href="https://www.carnegieuktrust.org.uk/publications/draft-online-harm-bill/" target="_blank">proposed</a> that an Online Harms Reduction Bill could be legislated in twenty clauses. Faced with the 194 clauses and 14 Schedules of the Online Safety Bill, one is half-tempted to hanker after something equally minimal. <br /><br />But only half-tempted. 
One reason for the Bill’s complexity is the <a href="https://www.cyberleagle.com/2020/06/online-harms-and-legality-principle.html" target="_blank">rule of law requirement</a> that the scope and content of a harm-based duty of care be articulated with reasonable certainty. That requires, among other things, deciding and clearly defining what does and does not count as harm. If no limits are set, or if harm is defined in vague terms, the duty will inevitably be arbitrary. If harm is to be gauged according to the subjective perception of someone who encounters a post or a tweet, that additionally raises the prospect of a <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">veto for the most readily offended</a>. <br /><br />These kinds of issue arise for a duty of care as soon as it is extended beyond risk of objectively ascertainable physical injury: the kind of harm for which <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">safety-related duties of care were designed</a>. In short, <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">speech is not a tripping hazard</a>. <br /><br />Another source of complexity is that the contemplated duties of care go beyond the ordinary scope of a duty of care – a duty to avoid causing injury to someone – into the exceptional territory of a duty to prevent other people from injuring each other. The greater the extent of a duty of care, the greater the need to articulate its content with reasonable precision. <br /><br />There is no escape from grappling with these problems once we start down the road of trying to apply a duty of care model to online speech. The issues with an amorphous duty of care, and its concomitant impact on internet users’ freedom of expression, inevitably bubbled to the surface as the White Paper consultation proceeded. 
<br /><br />The government’s response has been twofold: to confine harm to ‘physical or psychological harm’; and to spell out in detail a series of discrete duties with varying content, some based on harm and some on illegality of various kinds. The result, inevitably, is length and complexity. <br /><br />The attempt to fit individual online speech into a legal structure designed for tripping hazards is not, however, the only reason why the Bill is 197 pages longer than Carnegie UK's proposal. <br /><br />Others include:<br /><ul style="text-align: left;"><li><span style="font-family: georgia;">Inclusion of search engines as well as public user-to-user (U2U) platforms and private messaging services.</span></li><li>Exclusion of various kinds of low-risk service.</li><li>The last-minute inclusion of non-U2U pornography sites, effectively reviving the stalled and unimplemented Digital Economy Act 2017.</li><li>Inclusion of duties requiring platforms to judge and police certain kinds of illegal user content.</li><li>Implementing a policy agenda to require platforms to act proactively in detecting and policing user content.</li><li>Setting out what kinds of illegal content are in and out of scope.</li><li>Specific Ofcom powers to mandate the use of technology for detecting and removing terrorism and CSEA content. The CSEA powers could affect the ability to use end-to-end encryption on private messaging platforms.</li><li>Children-specific duties, including around age verification and assurance.</li><li>Provisions around fraudulent online advertising, included at the last minute.</li><li>The commitment made in April 2019 by then Culture Secretary Jeremy Wright QC that “where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”
The echoes of this promise continue to reverberate, with the government likely to put forward amendments to the Bill during its passage through Parliament.</li><li>Restricting the freedom of large platforms to remove some kinds of ‘high value’ user content.</li><li>Setting out when Ofcom can and cannot require providers to use approved proactive monitoring technology.</li><li>Enactment of several new communications criminal offences.</li></ul><b>What does the published Bill now do and how has it changed from the draft Bill? </b><br /><br />At the heart of the Bill are the safety duties:<br /></span><div><ul style="text-align: left;"><li><span style="font-family: georgia;">The illegality safety duty for U2U services </span></li><li><span style="font-family: georgia;">The illegality safety duty for search engines </span></li><li><span style="font-family: georgia;">The “content harmful to adults” safety duty for Category 1 (large high-risk) U2U services </span></li><li><span style="font-family: georgia;">The “content harmful to children” safety duty for U2U services likely to be accessed by children (meaning under-18s) </span></li><li><span style="font-family: georgia;">The “content harmful to children” safety duty for search services likely to be accessed by children </span></li></ul><span style="font-family: georgia;"><b>The illegality safety duties </b><br /><br />The U2U illegality safety duty is imposed on all in-scope user-to-user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium businesses and 700 large businesses, figures which include 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low-tech discussion forums and many others. 
<br /><br />The structure shown in the diagram below follows a pattern common to all five duties: a preliminary risk assessment duty which underpins and, to an extent, feeds into the substantive safety duty.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgysEYuzXUcGPQeXKYO6biUlrOfzjq39zpcRRrDa5NqSX3SOqtKM9DvWw-_OPr-VvuLKvQCO6xAGFCC7yZ1_qlj4VpxjHGptvZEGp1j20zKjDn8n7q_PCYQ9WQwObgjL5sQIdns4jG90tfflhS-xeKkq3cTqI_YECF-4TP4s_gp8la0K5EHwRzDrOHZg/s1350/Online%20SafetyBill01%20-%20U2U%20illegal.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="940" data-original-width="1350" height="279" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgysEYuzXUcGPQeXKYO6biUlrOfzjq39zpcRRrDa5NqSX3SOqtKM9DvWw-_OPr-VvuLKvQCO6xAGFCC7yZ1_qlj4VpxjHGptvZEGp1j20zKjDn8n7q_PCYQ9WQwObgjL5sQIdns4jG90tfflhS-xeKkq3cTqI_YECF-4TP4s_gp8la0K5EHwRzDrOHZg/w400-h279/Online%20SafetyBill01%20-%20U2U%20illegal.bmp" width="400" /></a></div><br />The role played by “Priority Illegal Content” (the red boxes) is key to the safety duty. This kind of content triggers the proactive monitoring obligations in S.9(3)(a) and (b): to prevent users encountering such content and to minimise the length of time for which any such content is present. This has a “predictive policing” element, since ‘illegal content’ includes content that would be illegal content if it were, hypothetically, present on the service. <br /><br />The countervailing S.19 duties to have regard to the importance of protecting users’ freedom of expression within the law and in relation to users’ privacy rights (which now also mentions data protection) are off-diagram. 
<br /><br />For the U2U illegality safety duty the Bill has made several significant changes compared with the draft Bill:<br /><ul style="text-align: left;"><li><span style="font-family: georgia;">An initial list of Priority criminal offences is included in Schedule 7 of the Bill. Previously any offences beyond terrorism and CSEA were to be added by secondary legislation. Schedule 7 can be amended by secondary legislation. </span></li><li><span style="font-family: georgia;">Greater emphasis on proactive monitoring and technology. The new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. Ofcom’s safety duty enforcement powers (which in the draft Bill did not permit Ofcom to require use of proactive technology) now allow it to do so in support of the S.9(2) and 9(3) duties, for publicly communicated illegal content. </span></li><li><span style="font-family: georgia;">The draft Bill set the duty-triggering threshold as the service provider having “reasonable grounds to believe” that the content is illegal. That has now gone. <span style="color: red;">[But a government amendment at Report Stage is introducing "reasonable grounds to infer".]</span></span></li></ul></span><span style="font-family: georgia;">The problem with the “reasonable grounds to believe” <span style="color: red;">or similar </span>threshold <strike>was</strike> <span style="color: red;">is </span>that it expressly bake<span style="color: red;">s</span><strike>d</strike> in over-removal of lawful content. <strike>Although that has now been dropped, the Bill offers no replacement. It is silent on how clear it must be that the content is illegal in order to trigger the illegality duty.</strike></span></div><div><span style="font-family: georgia;"><br />This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. 
Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time. Apply the duty to subjectively perceived speech offences (such as the <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">new harmful communications offence</a>) and the task becomes impossible. <br /><br />This kind of consideration is why <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">Article 15 of the EU eCommerce Directive</a> prohibits Member States from imposing general monitoring obligations on online intermediaries: not for the benefit of platforms, but for the protection of users. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Post-Brexit the UK is free to depart from Article 15 if it so wishes. In January 2021 the government expressly abandoned its previous policy of maintaining alignment with Article 15. The Bill includes a long list of ‘priority illegal content’ which U2U providers will be expected proactively to detect and remove, backed up with Ofcom’s new enforcement powers to require use of proactive technology.</span></div><div><span style="font-family: georgia;"><br />Perhaps the most curious aspects of the U2U illegality risk assessment and safety duties are the yellow boxes. These are aspects of the duties that refer to “harm” (defined as physical or psychological harm). Although they sit within the two illegality duties, none of them expressly requires the harm to be caused by, arise from or be presented by illegality – only ‘identified in’ the most recent illegal content risk assessment. 
<br /><br />It is hard to imagine that the government intends these to be standalone duties divorced from illegality, since that would amount to a substantive ‘legal but harmful’ duty, which the government has disclaimed any intention to introduce. Nevertheless, the presumably intended dependence of harm on illegality could be put beyond doubt.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbujstCcjn55v5NbfVqw5yn1Xo5UMLReSSrWdNYffMnSNOiNrqXIfAh_8RigoAMSe2yMZq3ZhrnaF8ok2JRVhQCtaqazzMWuGw8m3mZkuVPbas81q-khU5WVPenNEj7DcJ6NNoS17fd3k7E2scc24Uy08r_vqeFU6nDZQTUWDpQREPQAWnMTcYw6V3Ug/s1355/Online%20SafetyBill03%20-%20Search%20illegal.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="940" data-original-width="1355" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbujstCcjn55v5NbfVqw5yn1Xo5UMLReSSrWdNYffMnSNOiNrqXIfAh_8RigoAMSe2yMZq3ZhrnaF8ok2JRVhQCtaqazzMWuGw8m3mZkuVPbas81q-khU5WVPenNEj7DcJ6NNoS17fd3k7E2scc24Uy08r_vqeFU6nDZQTUWDpQREPQAWnMTcYw6V3Ug/w400-h278/Online%20SafetyBill03%20-%20Search%20illegal.bmp" width="400" /></a></div><br />For comparison, above are the corresponding illegality duties applicable to search services. They are based on the U2U illegality duties, adapted and modified for search. The same comment about facially self-standing “harm” duties can be made as for the U2U illegality duties. 
<br /><br /><b>Harmful content duties<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBIOg0ti2KNNmpoH3bD7M8Qeltwxl2wZHIaamoQRo6sDaWB2Lixa1jGmYOKlySu_95IZDsUjANg5EYhWOTG57uGLHZxVZ3KcpGpPpCv40PYk-zzIMZ6Qu8OOHD2eYz57hX2EgLPIa_0fPFia7ZmiV01Q6rn1qnjeOEYR9ScAVxVFohJaBgAu4rWqpYmA/s1366/Online%20SafetyBill02%20-%20U2U%20adults%20harmful.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="941" data-original-width="1366" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBIOg0ti2KNNmpoH3bD7M8Qeltwxl2wZHIaamoQRo6sDaWB2Lixa1jGmYOKlySu_95IZDsUjANg5EYhWOTG57uGLHZxVZ3KcpGpPpCv40PYk-zzIMZ6Qu8OOHD2eYz57hX2EgLPIa_0fPFia7ZmiV01Q6rn1qnjeOEYR9ScAVxVFohJaBgAu4rWqpYmA/w400-h275/Online%20SafetyBill02%20-%20U2U%20adults%20harmful.bmp" width="400" /></a></div><br /></b>The safety duty that has attracted most debate is the “content harmful to adults” duty, for the reason that it imposes duties in relation to legal content. It applies only to platforms designated as Category 1: those considered to be high risk by reason of size and functionality. <br /><br />Critics argue that the Bill should not trespass into areas of legal speech, particularly given the subjective terms in which “content harmful to adults” is couched. The government’s position has always been that the duty was no more than a transparency duty, under which platforms would be at liberty to permit content harmful to adults on their services so long as they are clear about that in their terms and conditions. The implication was that a platform was free to take no steps about such content, although whether the wording of the draft Bill achieved that was debatable. 
<br /><br />The Bill makes some significant changes, which can be seen in the diagram.<br /><ul style="text-align: left;"><li><span style="font-family: georgia;">It scraps the previous definition of non-designated ‘content harmful to adults’, consigning the “Adult of Ordinary Sensibilities” and its progeny to the oblivion of pre-legislative history. </span></li><li><span style="font-family: georgia;">In its place, non-designated content harmful to adults is now defined as “content of a kind which presents a material risk of significant harm to an appreciable number of adults in the UK”. Harm means physical or psychological harm, as elaborated in S.187. </span></li><li><span style="font-family: georgia;">All risk assessment and safety duties now relate only to “priority content harmful to adults”, which will be designated in secondary legislation. The previous circularly-drafted regulation-making power has been tightened up.</span></li><li><span style="font-family: georgia;">The only duty regarding non-designated content harmful to adults is to notify Ofcom if it turns up in the provider’s risk assessment. </span></li><li><span style="font-family: georgia;">The draft Bill’s duty to state how content harmful to adults is ‘dealt with’ by the provider is replaced by a provision stipulating that if any priority harmful content is to be ‘treated’ in one of four specified ways, the T&Cs must state for each kind of such content which of those is to be applied. (That, at least, appears to be what is intended.) </span></li><li><span style="font-family: georgia;">As with the other duties, the new ‘design and operation’ element of the risk assessment expressly refers to proactive technology. However, unlike the other duties, Ofcom’s newly extended safety duty enforcement powers do not permit Ofcom to require use of proactive technology in support of the content harmful to adults duties. 
</span></li><li><span style="font-family: georgia;">The Bill introduces a new ‘User Empowerment Duty’. This would require Category 1 providers to provide users with tools enabling them (if they so wish) to be alerted to, filter and block priority content harmful to adults. </span></li></ul>The four kinds of treatment of priority content harmful to adults that can trigger a duty to specify which is to be applied are: taking down the content, restricting users’ access to the content, limiting the recommendation or promotion of the content, or recommending or promoting the content. It can be seen that the first three are all restrictive measures, whereas the fourth is the opposite. <br /><br />The two remaining safety duties are the ‘content harmful to children’ duties. Those apply respectively to user-to-user services and search services likely to be accessed by children. Such likelihood is determined by the outcome of a ‘children’s access assessment’ that an in-scope provider must carry out. 
<br /><br />For U2U services, the duties are:<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW83j65VateQ_vqWMk0nUp9BZAycolhKUYQsPFWlxwy82RJcXaYBgt_WnIme8D8ClMkN7wWqgAvcL6qB95k_5g2BKzXnifdRIMZmgGoCSZixbP2jWcbglLJBAzgCeEox9yBvE4EAy9PQxKY5HzeBfK7rYrLBojgVpaDfuWtK9Cm8auWpX980mDOseFWQ/s1352/Online%20SafetyBill02%20-%20U2U%20children%20harmful.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="944" data-original-width="1352" height="279" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW83j65VateQ_vqWMk0nUp9BZAycolhKUYQsPFWlxwy82RJcXaYBgt_WnIme8D8ClMkN7wWqgAvcL6qB95k_5g2BKzXnifdRIMZmgGoCSZixbP2jWcbglLJBAzgCeEox9yBvE4EAy9PQxKY5HzeBfK7rYrLBojgVpaDfuWtK9Cm8auWpX980mDOseFWQ/w400-h279/Online%20SafetyBill02%20-%20U2U%20children%20harmful.bmp" width="400" /></a></div><br />These duties are conceptually akin to the ‘content harmful to adults’ duty, except that instead of focusing on transparency they impose substantive preventive and protective obligations. Unlike the previously considered duties these have three, rather than two, levels of harmful content: Primary Priority Content, Priority Content and Non-Designated Content. The first two will be designated by the Secretary of State in secondary legislation. The definition of harm is the same as for the other duties. 
<br /><br />The corresponding search service duty is:<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVB4y1ogpYxQoCD1rXgLQWgjiUPBqhvAgA5WwXzyHaS6d1Lkz_-ILM0tgY1tS8Mgj_3HjnU64zidmuBcadGvhqtHl0JxSiG5NgkmDuzTikxHcy3sluZxwaksE2i1MS9XlCLXPmBLuerHIDl8V8RYc14zRrJDEfrCXbwYpnynDS6XM27zi7d_GEAZQSCQ/s1352/Online%20SafetyBill02%20-%20Search%20children%20harmful.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="944" data-original-width="1352" height="279" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVB4y1ogpYxQoCD1rXgLQWgjiUPBqhvAgA5WwXzyHaS6d1Lkz_-ILM0tgY1tS8Mgj_3HjnU64zidmuBcadGvhqtHl0JxSiG5NgkmDuzTikxHcy3sluZxwaksE2i1MS9XlCLXPmBLuerHIDl8V8RYc14zRrJDEfrCXbwYpnynDS6XM27zi7d_GEAZQSCQ/w400-h279/Online%20SafetyBill02%20-%20Search%20children%20harmful.bmp" width="400" /></a></div></span><span style="font-family: georgia;"><br /><b>Fraudulent advertising </b><br /><br />The draft Bill’s exclusion of paid-for advertising is replaced by specific duties on Category 1 U2U services and Category 2A search services in relation to fraudulent advertisements. The main duties are equivalent to the S.9(3)(a) to (c) and S.24(3) safety duties applicable to priority illegal content. <br /><br /><b>Pornography sites</b> </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">The Bill introduces a new category of ‘own content’ pornography services, which will be subject to their own regime, separate from the regime for user-generated content. 
On-demand programme services already regulated under the Communications Act 2003 are excluded.<br /></span><span style="font-family: georgia;"><br /><b>Excluded news publisher content </b><br /><br />The Bill, like the draft Bill before it, <a href="https://www.cyberleagle.com/2021/06/carved-out-or-carved-up-online-safety.html">excludes ‘news publisher content’ from the scope of various provider duties.</a> This means that a provider’s duties do not apply to such content. That does not prevent a provider’s actions taken in pursuance of fulfilling its duties from affecting news publisher content. News media organisations have been pressing for more protection in that respect. It seems likely that the government will bring forward an amendment during the passage of the Bill. According to <a href="https://www.dailymail.co.uk/news/article-10621259/Nadine-Dorries-preserve-freedom-Press-online.html">one report</a> that may require platforms to notify the news publisher before taking action. <br /><br />The scheme for excluding news publisher content, together with the express provider duties in respect of freedom of expression, journalistic content and content of democratic importance (CDI), is shown in the diagram below.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRB0KFRt_Gcys7s_nH4AHCkS2NXGqzWniTag3z35aap1S-JLDA2Zx1tr96mtOylBuXqqprQ6DTOmTaOOebgQFzy3aRU9hhJapGuZ_a-JwsFjzLqayunmIsY6NKPHUeMbopZrWBGwO5ZCTKg-nOwwUjomZjOabRuWlLPvOolGvp8imTUiSXm8DbiIOShw/s1349/Press%20journalism%20Online%20Safety%20Bill%20as%20introduced.bmp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="934" data-original-width="1349" height="278" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRB0KFRt_Gcys7s_nH4AHCkS2NXGqzWniTag3z35aap1S-JLDA2Zx1tr96mtOylBuXqqprQ6DTOmTaOOebgQFzy3aRU9hhJapGuZ_a-JwsFjzLqayunmIsY6NKPHUeMbopZrWBGwO5ZCTKg-nOwwUjomZjOabRuWlLPvOolGvp8imTUiSXm8DbiIOShw/w400-h278/Press%20journalism%20Online%20Safety%20Bill%20as%20introduced.bmp" width="400" /></a></div><br />The most significant changes over the draft Bill are:<br /><ul style="text-align: left;"><li><span style="font-family: georgia;">The CDI duty is now to ensure that systems and processes apply to a “wide” diversity of political opinions in the same way. </span></li><li><span style="font-family: georgia;">The addition of a requirement on all U2U providers to inform users in terms of service about their right to bring a claim for breach of contract if their content is taken down or restricted in breach of those terms.</span></li></ul><b>Criminal offence reform </b><br /><br />Very early on, in 2018, the government asked the Law Commission to review the communications offences – chiefly the notorious S.127 of the Communications Act 2003 and the Malicious Communications Act 1988. <br /><br />It is open to question whether the government quite understood at that time that S.127 was more restrictive than any offline speech law. Nevertheless, there was certainly a case for reviewing the criminal law to see whether the online environment merited any new offences, and to revise the existing communications offences. The Law Commission also conducted a review of hate crime legislation, which the government is considering. <br /><br />The Bill includes four new criminal offences - harmful communications, false communications, threatening communications, and a ‘cyberflashing’ offence. Concomitantly, it would repeal S.127 and the 1988 Act. 
<br /><br />Probably the least (if at all) controversial is the cyberflashing offence (albeit <a href="https://news.sky.com/story/new-law-banning-cyberflashing-to-be-included-in-online-safety-bill-12564666" target="_blank">some will say</a> that the requirements to prove intent or the purpose for which the image is sent set too high a bar). <br /><br />The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the <a href="https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/">Law Commission’s recommendation</a>. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them). <br /><br />It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the <i>victim of the threat</i> would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern. <br /><br />The harmful communications offence suffers from problems which the Law Commission itself did not fully address. It is the Law Commission’s proposed replacement for S.127(1) of the Communications Act 2003. <br /><br />When discussing the effect of the ‘legal but harmful’ provisions of the Bill the Secretary of State said: “This reduces the risk that platforms are incentivised to over-remove legal material ... 
because they are put under pressure to do so by campaign groups or individuals who claim that controversial content causes them psychological harm.” <br /><br />However, the harmful communications offence is cast in terms that create just that risk under the illegality duty, via someone inserting themselves into the ‘likely audience’ and alerting the platform (explained in this <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html">blogpost</a> and <a href="https://twitter.com/cyberleagle/status/1504096890400432132">Twitter thread</a>). The false communications offence also makes use of ‘likely audience’, albeit not as extensively as the harmful communications offence. <br /><br /><b>Secretary of State powers</b><br /><br />The draft Bill empowered the Secretary of State to send back a draft Code of Practice to Ofcom for modification to reflect government policy. This extraordinary provision attracted universal criticism. It has now been replaced by a power to direct modification “for reasons of public policy”. This is unlikely to satisfy critics anxious to preserve Ofcom's independence.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;"><b>Extraterritoriality </b><br /><br />The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world. <br /><br />The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service. <br /><br />Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. 
That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated. For instance, is it an absolute measure or to be gauged relative to the size of the service? Does it mean ‘more than insignificant’, or does it connote something more? <br /><br />The new regime for own-content pornography sites adopts limbs (a) and (b), but omits (c). <br /><br />The Bill goes on to provide that the duties imposed on user-to-user and search services extend only to (a) the design, operation and use of the service in the United Kingdom, and (b) in the case of a duty expressed to apply in relation to users of a service, its design, operation and use as it affects UK users. The own-content pornography regime adopts limb (a), but omits (b). <br /><br />The new communications offences apply to an act done outside the UK, but only if the act is done by an individual habitually resident in England and Wales or a body incorporated or constituted under the law of England and Wales. It is notable that under the illegality safety duty: “for the purposes of determining whether content amounts to an offence, no account is to be taken of whether or not anything done in relation to the content takes place in any part of the United Kingdom.” The effect appears to be to deem user content to be illegal for the purposes of the illegality safety duty, regardless of whether the territoriality requirements of the substantive offence are satisfied.</span></div><div><span style="font-family: georgia;"><br /><b>Postscript </b><br /><br /><a href="https://www.cyberleagle.com/2020/06/online-harms-revisited.html" target="_blank">Some time ago I ventured</a> that if the road to hell was paved with good intentions, this was a motorway. The government continues to speed along the duty of care highway. 
<br /><br />It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in <i>Blackstone’s Commentaries</i> (1769). It endures today in <a href="https://www.cyberleagle.com/2017/10/towards-filtered-internet-european.html#PresumedIllegal" target="_blank">human rights law</a>. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality. <br /><br />It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">systems and processes rather than individual items of content</a>. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content. 
</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">[Amended 11 July 2022 to take account of government's proposed Report Stage amendment to illegality duties on "reasonable grounds to infer".]<br /></span><span style="font-family: georgia;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg12tR46FGsxvwjc-YdjXQzAqwCSfUSau00I8ts0UpVv4sLx_98pFa2iajZBfPbgkgCz30I9MpLgam0bPdngQtQboWK-RumW_Fq6TmCe1FAf4YZxhhCXTyhZhBfYZzeE8kJ5I7LG5LQc9VoEUTwM1mZ_ZUdhk0EQIyxkJQbVKsKCqNaDNodXlqg5S-yDg/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg12tR46FGsxvwjc-YdjXQzAqwCSfUSau00I8ts0UpVv4sLx_98pFa2iajZBfPbgkgCz30I9MpLgam0bPdngQtQboWK-RumW_Fq6TmCe1FAf4YZxhhCXTyhZhBfYZzeE8kJ5I7LG5LQc9VoEUTwM1mZ_ZUdhk0EQIyxkJQbVKsKCqNaDNodXlqg5S-yDg/s1600/snip2.png" width="135" /></a></div><br /></span></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-1542442568791690812022-02-19T17:31:00.002+00:002022-02-19T21:41:56.617+00:00Harm Version 4.0 – the Online Safety Bill in metamorphosis<p><span style="font-family: georgia;">It is time – in fact it is overdue - to take
stock of the increasingly imminent Online Safety Bill. The two months before
and after Christmas saw a burst of activity: Reports from the <a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/159784/no-longer-the-land-of-the-lawless-joint-committee-reports/" target="_blank">Joint Parliamentary Committee</a> scrutinising the draft Bill, from the <a href="https://committees.parliament.uk/publications/8608/documents/86960/default/" target="_blank">Commons DCMS Committee</a> on the ‘Legal but Harmful’ issue, and from the <a href="https://committees.parliament.uk/publications/8669/documents/89002/default/" target="_blank">House of Commons Petitions Committee</a> on Tackling Online Abuse.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Several Parliamentary debates took place, and recently the DCMS
made two announcements: first, that <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content" target="_blank">an extended list of priority illegal content</a> would be enacted on the face of the legislation, as would the Law
Commission’s recommendations for three modernised communications offences; and second,
that <a href="https://www.gov.uk/government/news/world-leading-measures-to-protect-children-from-accessing-pornography-online" target="_blank">age verification</a> would be extended to apply to non-user-to-user pornography
sites. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Most recently of all, the Home Secretary is reported to have
gained Cabinet support for powers for Ofcom (the regulator that would implement,
supervise and enforce the Bill’s provisions) to require use of technology to
proactively seek out and remove illegal content and legal content harmful to
children.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As the government’s proposals have continued to evolve under
the guidance of their sixth Culture Secretary, and with Parliamentary
Committees and others weighing in from all directions, you may already be floundering
if you have not followed, blow by blow, the progression from the 2017 <a href="https://www.gov.uk/government/consultations/internet-safety-strategy-green-paper" target="_blank">Internet Safety Strategy Green Paper</a>, via the April 2019 <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" target="_blank">Online Harms White Paper</a> and the
May 2021 <a href="https://www.gov.uk/government/publications/draft-online-safety-bill" target="_blank">draft Online Safety Bill</a>, to the recent bout of political jousting. <o:p></o:p></span></p><p class="MsoNormal"><span style="font-family: georgia;">If you are already familiar with the legal concept of a duty of care, the significance of objective versus subjective harms, the distinction between a duty to avoid causing injury and a duty to prevent others causing injury, and the notion of safety by design, then read on. If not, or if you would like a recap, it’s all in the Annex.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In brief, the draft Bill would impose a new set of legal
obligations on an estimated 24,000 UK providers of user-to-user services (everyone
from large social media platforms to messaging services, multiplayer online
games and simple discussion forums) and search engines. The government calls
these obligations a duty of care.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This post is an unashamedly selective attempt to put in
context some of the main threads of the government’s thinking, explain key
elements of the draft Bill and pick out a few of the most significant Parliamentary
Committee recommendations.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>The government’s thinking</b> The proposals bundle together
multiple policy strands. Those include:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="text-indent: -18pt;"><span style="font-family: georgia;">Requiring providers to take steps to prevent,
inhibit or respond to illegal user content</span></span></li><li><span style="font-family: georgia;">Requiring providers to take action in respect of
‘legal but harmful’ user content</span></li><li><span style="font-family: georgia;">Limiting the freedom of large social media
platforms to decide which user content should and should not be on their
services.</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government also proposes to enact new and reformed
criminal offences for users. These are probably the most coherent aspects of
the proposed legislation, yet still have some serious problems – in their own right, in the case of the <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">new harm-based offence</a>, and
also in how offences interact with the illegality strand of the
duty of care.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Protection of children has been a constant theme, sparking
debates about age verification, age assurance and end-to-end encryption. Overall,
the government has pursued its quest for online safety under the Duty of Care
banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That slogan, to be blunt, has no relevance to the draft
Bill. Thirty years ago there may have been laws that referred to paper, post,
or in some other way excluded electronic communication and online activity.
Those gaps were plugged long ago. With
the exception of election material imprints (a gap that is being fixed by a different
Bill currently going through Parliament), there are no criminal offences that
do not already apply online (other than jokey examples like driving a
car without a licence). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On the contrary, the draft Bill’s Duty of Care would create
<a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">novel obligations for both illegal and legal content that have no comparable counterpart offline</a>. The arguments for these duties rest in reality on the premise that the
internet and social media are different from offline, not that we are trying to
achieve offline-online equivalence.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Strand 1: Preventing and Responding to Illegality<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Under the draft Bill, all 24,000 in-scope UGC providers would
be placed under a duty of care (so-called) in respect of illegal user content. The duty would be reactive or proactive,
depending on the kind of illegality involved. Illegality for this purpose means
criminal offences.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The problem with applying the duty of care label to this obligation
is that there is no necessary connection between safety (in the duty of care
sense of risk of personal injury) and illegality. Some criminal law is
safety-related and some is not. We may be tempted to talk of being made safe
from illegality, but that is not safety in its proper duty of care sense.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In truth, the illegality duty appears to stem not from any legal
concept of a duty of care, but from a broader argument that platforms have a moral
responsibility to take positive steps to prevent criminal activity by users on
their services. That contrasts with merely being incentivised to remove user
content on becoming aware that it is unlawful. The latter is the position of a
host under the <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">existing intermediary liability regime</a>, with which the proposed
positive legal duty would co-exist.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That moral framing may explain why the DCMS Minister was
able to say to a recent Parliamentary Committee:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“I think there is absolute unanimity
that the Bill’s position on that is the right position: if it is illegal
offline it is illegal online and there should be a duty on social media firms
to stop it happening. There is agreement on that.” (1 Feb 2022, Commons DCMS
Sub-Committee on Online Harms and Disinformation)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is true that the illegality safety duty has received relatively
little attention compared with the furore over the draft Bill’s ‘legal but
harmful’ provisions. Even then, the consensus to which the Minister alludes may
not be quite so firm. It may seem obvious that illegal content should be
removed, but that overlooks the fact that the draft Bill would require removal
without any independent adjudication of illegality. That contradicts the
presumption against prior restraint that forms a core part of traditional procedural
protections for freedom of expression. To
the extent that the duty requires hosts to monitor for illegality, that departs
from the long-standing principle embodied in <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">Article 15 of the eCommerce Directive</a> prohibiting the imposition of general monitoring obligations.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is noteworthy that the DCMS Committee Report recommends ([21])
that takedown should not be the only option to fulfil the illegality safety
duty, but that measures such as tagging should be available.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">So an unbounded notion of preventing illegality does not sit
well on the offline duty of care foundation of risk of physical injury. Difficult
questions arise as a result. Should the duty apply to all kinds of criminal
offence capable of being committed online? Or, more closely aligned with offline
duties of care, should it be limited strictly to safety-related criminal
offences? Or perhaps to risk of either physical injury or psychological harm?
Or, more broadly, to offences for which it can be said that the individual is a
victim? <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The extent to which over time the government’s proposals have
fluctuated between several of these varieties of illegality perhaps reflects
the difficulty of shoehorning this kind of duty into a legal box labelled ‘duty
of care’. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Moving on from the scope of illegality, what would the draft
Bill require U2U providers to do? Under the draft Bill, for ‘ordinary’ illegal
content the safety duty would be reactive – to remove it on receiving notice.
For ‘priority’ illegal content the duty would in addition be preventative: as
the DCMS described it in their recent announcement of new categories of
priority illegal content:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“To proactively tackle the
priority offences, firms will need to make sure the features, functionalities
and algorithms of their services are designed to prevent their users
encountering them and minimise the length of time this content is available.
This could be achieved by automated or human content moderation, banning
illegal search terms, spotting suspicious users and having effective systems in
place to prevent banned users opening new accounts.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">These kinds of duty prompt questions about how a platform is
to decide what is and is not illegal, or (apparently) who is a suspicious user.
The draft Bill provides that the illegality duty should be triggered by
‘reasonable grounds to believe’ that the content is illegal. It could have adopted
a much higher threshold: manifestly illegal on the face of the content, for
instance. The lower the threshold, the greater the likelihood of legitimate
content being removed at scale, whether proactively or reactively.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The draft Bill raises serious (and already well-known, in
the context of existing intermediary liability rules) concerns of likely
over-removal through mandating platforms to detect, adjudge and remove illegal
material on their systems. Those are exacerbated by adoption of the ‘reasonable
grounds to believe’ threshold.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Current state of play</b> The government’s newest list of
priority offences (those to which the proactive duty would apply) mostly
involves individuals as victims but also includes money laundering, an offence which
does not do so. The list includes revenge and extreme pornography, as to which
the Joint Scrutiny Committee observed that the first is an offence against
specific individuals, whereas the second is not. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Given how broadly the priority offences are now ranging, it
may be a reasonable assumption that the government does not intend to limit
them to conduct that would carry a risk of physical or psychological harm to a
victim.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government intends that its extended list of priority
offences would be named on the face of the Bill. That goes some way towards
meeting criticism by the Committees of leaving that to secondary legislation.
However, the government has not said that the power to add to the list by
secondary legislation would be removed. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As to the threshold that would trigger the duty, the Joint Scrutiny
Committee has said that it is content with ‘reasonable grounds to believe’
so long as certain safeguards are in place that would render the duty
compatible with an individual’s right to free speech; and so long as service
providers are required to apply the test in a proportionate manner set out in
clear and accessible terms to users of the service. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Joint Committee’s specific suggested safeguard is that Ofcom
should issue a binding Code of Practice on identifying, reporting on and acting
on illegal content. The Committee considers that Ofcom’s own obligation to
comply with human rights legislation would provide an additional safeguard for
freedom of expression in how providers fulfil this requirement. How much comfort
one should take from that, when human rights legislation sets only the outer boundaries of acceptable conduct by the state, is debatable.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Joint Committee also refers to other safeguards proposed
elsewhere in its report. Identifying exactly which it is referring to in the
context of illegality is not easy. Most probably, it is referring to those
listed at [284], at least insofar as they relate to the illegality safety duty.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Committee proposes these as a more effective alternative
to strengthening the ‘have regard to the importance of freedom of expression’
duty in Clause 12 of the draft Bill:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="text-indent: -18pt;"><span style="font-family: georgia;">greater independence for Ofcom ([377])</span></span></li><li><span style="font-family: georgia;">routes for individual redress beyond service
providers ([457])</span></li><li><span style="font-family: georgia;">tighter definitions around content that creates
a risk of harm ([176] (adults), [202] (children))</span></li><li><span style="font-family: georgia;">a greater emphasis on safety by design ([82])</span></li><li><span style="font-family: georgia;">a broader requirement to be consistent in the
application of terms of service</span></li><li><span style="font-family: georgia;">stronger minimum standards ([184])</span></li><li><span style="font-family: georgia;">mandatory codes of practice set by Ofcom, who
are required to be compliant with human rights law (generally [358]; illegal
content [144]; content in the public interest [307])</span></li><li><span style="font-family: georgia;">stronger protections for news publisher content
([304])</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is not always obvious how some of these recommendations
(such as increased emphasis on safety by design) qualify as freedom of
expression safeguards.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">For its part, the DCMS Committee has suggested ([12]) that
the definition of illegal content should be reframed to explicitly add the need
to consider context as a factor. How providers should go about obtaining such
contextual information - much of which will be outside the contents of user
posts – is unclear. The recommendation also has implications for the
degree of surveillance and breadth of analysis of user communications that
would be necessary to fulfil the duty. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Content in the public interest</b> The Joint Committee
recommends a revised approach to the draft Bill’s protections for journalistic
content and content of democratic importance. ([307]) At present these
qualifications to the illegality and legal but harmful duties would apply only
to Category 1 service providers. However, the Committee also recommends (at
[246]) replacing strict categories based on size and functionality with a
risk-based sliding scale, which would determine which statutory duties apply to
which providers. (The government has told the Petitions Committee that it is
considering changing the Category 1 qualification from size <i>and</i> functionality to size <i>or</i> functionality.)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Joint Committee relies significantly on this
recommendation, under the heading of ‘protecting high value speech’. It
proposes to replace the existing journalism and content of democratic
importance protections with a single statutory requirement to have
proportionate systems and processes to protect ‘content where there are
reasonable grounds to believe it will be in the public interest’ ([307]). It
gives the examples of journalistic content, contributions to political or
societal debate and whistleblowing as being likely to be in the public
interest. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Ofcom would be expected to produce a binding Code of
Practice on steps to be taken to protect such content and guidance on what is
likely to be in the public interest, based on their existing experience and
caselaw.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As with the existing proposed protections, the ‘public
interest’ proposal appears to be intended to apply across the board to both
illegality and legal but harmful content (see, for instance, the Committee’s
discussion at [135] in relation to the Law Commission’s proposed new
‘harm-based’ communications offence). </span><span style="font-family: georgia;">This proposal is discussed under Strand 3 below.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Strand 2 - Legal but harmful <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The most heavily debated aspect of the government’s
proposals has been the ‘legal but harmful content’ duty. In the draft Bill this
comes in two versions: a substantive duty to mitigate user content harmful to
children; and a transparency duty in relation to user content harmful to
adults. That, at any rate, appears to be the government’s political intention.
As drafted, the Bill could be read as going further and imposing a substantive ‘content
harmful to adults’ duty (something that at least some of the Committees want the
legislation explicitly to do). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Compared with an illegality duty, the legal but harmful duty
is conceptually closer to a duty of care properly so called. As a species of
duty to take care to avoid harm to others, it at least inhabits approximately the
same universe. However, the similarity stops there. It is a duty of care
detached from its moorings (risk of objectively ascertainable physical injury)
and then extended into a duty to prevent other people harming each other. As
such, like the illegality duty, it has no comparable equivalent in the offline
world; and again, as with the illegality duty, any concept of risk-creating
activity by providers is stretched and homeopathically diluted to encompass mere
facilitation of individual public speech.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Those features make the legal but harmful duty a
categorically different kind of obligation from analogous offline duties of care; one
that – at least if framed as a substantive obligation - is difficult to render
compliant with a human rights framework, due to the inherently vague notions of
harm that inevitably come into play once harm is extended beyond risk of
objectively ascertainable physical injury.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This problem has bedevilled the Online Harms proposals from
the start. The White Paper (Harm V.1) left harm undefined, which would have
empowered Ofcom to write an alternative statute book to govern online speech.
The Full Consultation Response (Harm V.2) defined harm as “reasonably
foreseeable risk of a significant adverse physical or psychological impact on
individuals”. The draft Bill (Harm V.3) spans the gamut, from undefined (for
priority harmful content) to physical or psychological harm (general
definition) to a complex cascade of definitions starting with the “adult (or
child) of ordinary sensibilities” for residual non-priority harmful content.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If harm includes subjectively perceived harm, then it is
likely to embody a standard of the most easily offended reader and to require
platforms to make decisions based on impossibly vague criteria and
unascertainable factual context. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The debate has not been helped by a common tendency to refer to ‘risk’ in the abstract, without identifying what counts and, just as importantly, what does not count as harm. Everyday expressions such as ‘harm’,
‘abuse’, ‘trolling’ and so on may suffice for political debate. But legislation
has to grapple with the uncomfortable question of what kinds of lawful but
controversial and unpleasant speech should <i>not</i> qualify as harmful. That
is a question that a lawmaker cannot avoid if legislation is to pass the ‘clear
and precise’ rule of law test. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Even when a list is proposed it still tends to be pitched at
a level that can leave basic questions unanswered. The Joint Committee, for
instance, proposes a list including ‘abuse, harassment or stirring up of
violence or hatred based on the protected characteristics in the Equality Act
2010’, and “content or activity likely to cause harm amounting to significant
psychological distress to a likely audience (defined in line with the Law
Commission offence)”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On that basis does blasphemy count as legal but harmful
content? Does the Committee’s proposed list of specific harms answer that
question? Some would certainly claim to suffer significant psychological distress
from reading blasphemous material. Religion
or belief is a protected characteristic under the Equality Act. How would that be
reconciled with the countervailing duty to take into account the importance of
freedom of expression within the law or, as the Joint Committee would propose
for high risk platforms, to assess the public interest in high value speech
under the guidance of Ofcom?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If none of these provides a clear answer, the result is to
delegate the decision-making to Ofcom. That prompts the question whether such a
controversial decision as to what speech is or is not permissible online should
be made, and made in clear terms, by Parliament. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">While on the topic of delegation, let us address the proposition that the draft Bill’s ‘legal but harmful to adults’
duty delegates a state power to platforms. The Joint Committee report has an entire
section entitled ‘Delegation of decision making’ ([165] to [169]). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">At present, service providers have freedom to decide what legal
content to allow or not on their platforms, and to make their own rules
accordingly. That does not involve any delegation of state power, any more than
Conway Hall exercises delegated state power when it decides on its venue hiring
policy. Unless and until the state chooses to take a power via legislation, there
is no state power capable of delegation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Clause 11 (if we take at face value what the government says
it is) requires platforms to provide users with information about certain of their
decisions, and to enforce their rules consistently. Again, the state has not taken
any power (either direct or via Ofcom) to instruct providers what rules to make.
No state power, no delegation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is only when (as at least some Committees propose) the
state takes a power to direct or govern decision-making that delegation is involved. Such
a power would be delegated to Ofcom. Providers are then obliged to enforce the
Bill’s and Ofcom’s rules against users. That involves providers in making decisions
about what content contravenes the rules. There is still no delegation of rule-making,
except to the extent that latitude, vagueness or ambiguity in those rules
results in de facto delegation of rule-making to the providers.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Current State of Play</b> None of the Committees has
accepted the submissions from a number of advocacy groups (and the previous
<a href="https://committees.parliament.uk/publications/6878/documents/72529/default/" target="_blank">Lords Committee Report</a> on Freedom of Expression in the Digital Age) that ‘legal
but harmful to adults’ obligations should be dropped from the legislation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, each Committee has put forward its own alternative
formulation:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="text-indent: -18pt;"><span style="font-family: georgia;">The Joint Committee’s list of reasonably
foreseeable risks of harm that providers should be required to identify and
mitigate (replacing the draft Bill’s transparency duty with a substantive
mitigation duty) ([176]), as part of an overall package of recommended changes</span></span></li><li><span style="font-family: georgia;">The Petitions Committee’s recommendation that
the primary legislation should contain as comprehensive an indication as
possible of what content would be considered harmful to adults or children; and
that abuse based on characteristics protected under the Equality Act and hate
crime legislation should be designated as priority harmful content in the
primary legislation. This Committee also considers that the legal but harmful
duty should be a substantive mitigation duty. ([46], [67])</span></li><li><span style="font-family: georgia;">The DCMS Committee’s recommendation (similar to
the Joint Committee) that the definition of (legal) content that is harmful to
adults should be reframed to apply to reasonably foreseeable harms identified
in risk assessments ([20]). This sits alongside a proposal that providers be
positively required to balance their safety duties with freedom of expression
([19]); and that providers should be required to assess and take into account
context, the position of the speaker, the susceptibility of the audience and
the content’s accuracy. ([20]) This Committee appears also, at least
implicitly, to support conversion into a substantive duty.</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">The DCMS Committee also recommends that the definition of
legal content harmful to adults should: “explicitly include content that
undermines, or risks undermining, the rights or reputation of others, national
security, public order and public health or morals, as also established in
international human rights law.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On the face of it this is a strange proposal. The listed
items are aims in pursuance of which, according to international human rights
law, a state may, if it so wishes, restrict freedom of expression – subject to
the restriction being prescribed by law (i.e. by clear and certain rules),
necessary for the achievement of that aim, and proportionate. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The listed aims do not themselves form a set of clear and
precise substantive rules, and are not converted into such by the device of adding
‘undermines, or risks undermining’. The result is an unfeasibly vague
formulation. Moreover, it appears to suggest that every kind of speech that can
legitimately be restricted under international human rights law, should
be. It is difficult to believe that the
Committee really intends that. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The various Committee proposals illustrate how firmly the draft
Bill is trapped between the twin devils of over-removal via the blunt
instrument of a content-oriented safety duty; and of loading onto
intermediaries the obligation to make ever finer and more complex
multi-factorial judgements about content. The third propounded alternative of <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html" target="_blank">safety by design</a> has
its own vice of potentially interfering with all content, good and bad alike.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Strand 3 - Reduce the discretion of large social media
platforms to decide what content should and should not be on their services</b><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Until very late in the consultation process the focus of the
government’s Online Harms proposals was entirely on imposing duties on
providers to prevent harm by their users, with the consequent potential for
over-removal of user content mitigated to some degree by a duty to have regard
to the importance of freedom of expression within the law. This kind of
proposal sought to leverage the abilities of platforms to act against user
content.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">When the Full Response was published a new strand was evident: seeking to rein in the ability of large platforms to decide
what content should and should not be present on their services. It is possible that this may have been prompted by events such as the suspension of then President Trump’s
Twitter account. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Be that as it may, the Full Response and now the draft Bill
include provisions, applicable to Category 1 U2U providers, conferring special
protections on journalistic content and content of democratic importance. The most far-reaching protections relate to
content of democratic importance. For such content the provider must not only
have systems and processes designed to take into account the importance of
free expression of such content when making certain decisions (such as
takedown, restriction or action against a user), but must also ensure that those systems and processes apply in
the same way to a diversity of political opinion. Whatever the merits and
demerits of such proposals, they are far removed from the original policy goal of
ensuring user safety.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Current state of play</b> As noted above, the Joint
Committee proposes that the journalistic content and content of democratic
importance protections be replaced by a single statutory requirement to have proportionate
systems and processes to protect ‘content where there are reasonable grounds to
believe it will be in the public interest’ ([307]). The DCMS Committee, in its
recommendation on the scope of legal but harmful content, recommends taking account of
democratic importance and journalistic nature when considering the context of
content ([23]). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although the Committee’s discussion is about protecting
‘high value speech’, there is a risk involved in generalising this protection to the kind of single statutory safeguard for ‘content in the public interest’ envisaged
by the Committee. The risk is that in practice the safeguard would be turned on its head – with
the result that only limited categories of ‘high value speech’ would be seen as
presumptively qualifying for protection from interference, leaving ‘low value’
speech to justify itself and in reality shorn of protection. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That is the error that Warby L.J. identified in <i>Scottow</i>,
a prosecution under S.127 Communications Act 2003:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The Crown evidently did not
appreciate the need to justify the prosecution, but saw it as the defendant's
task to press the free speech argument. The prosecution argument failed
entirely to acknowledge the well-established proposition that free speech
encompasses the right to offend, and indeed to abuse another. The Judge appears
to have considered that a criminal conviction was merited for acts of
unkindness, and calling others names, and that such acts could only be
justified if they made a contribution to a "proper debate". … It is not the law that individuals are only
allowed to make personal remarks about others online if they do so as part of a
"proper debate". <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the political arena, the presumption that anything
unpleasant or offensive is prima facie to be condemned can be a powerful
one. The 10 December 2021 House of Lords debate on freedom of speech was packed with pleas to be nicer to each other online: hard to disagree with
as a matter of etiquette. But if being unpleasant is thought of itself to
create a presumption against freedom of expression, that does not reflect human
rights law. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The risk of de facto reversal of the presumption in favour
of protection of speech when we focus on protecting ‘high value’ speech is all
the greater where platforms are expected to act in pursuance of their safety
duty proactively, in near real-time and at scale, against a duty-triggering threshold
of reasonable grounds to believe. </span></p><p class="MsoNormal"><span style="font-family: georgia;">That is without even considering the daunting
prospect of an AI algorithm that claims to be capable of assessing the public
interest.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Strand 4. Create new and reformed criminal offences that
would apply directly to users</b><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In parallel with the government’s proposals for an online duty
of care, the Law Commission has been conducting two projects looking at the
criminal law as it affects online and other communications: <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1041160/Modernising-Communications-Offences-2021-Law-Com-No-399.pdf">Modernising Communications Offences</a> (Law Com No 399, 21 July 2021); <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1041169/Hate-crime-report-accessible.pdf" target="_blank">Hate Crime Laws</a> (Law Com No 402, 7 December 2021). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The communications offences report recommended:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="text-indent: -18pt;"><span style="font-family: georgia;"><a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html" target="_blank">A new harm-based communications offence</a> to
replace S.127(1) Communications Act 2003 and the Malicious Communications Act
1988</span></span></li><li><span style="font-family: georgia;">A new offence of encouraging or assisting
serious self-harm</span></li><li><span style="font-family: georgia;">A new offence of cyberflashing; and</span></li><li><span style="font-family: georgia;">New offences of sending knowingly false,
persistent or threatening communications, to replace S.127(2) Communications
Act 2003</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">It also recommended that the government consider legislating
to criminalise maliciously sending flashing images to known sufferers of
epilepsy. It was not persuaded that specific offences of pile-on harassment or
glorification of violent crime would be necessary, effective or desirable.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The hate crime report made a complex series of
recommendations, including extending the existing ‘stirring up’ offences to
cover hatred on grounds of sex or gender. It recommended that if the draft
Online Safety Bill becomes law, inflammatory hate material should be included
as ‘priority illegal content’ and the stirring up offences should not apply to
social media companies and other platforms in respect of user to user content
unless intent to stir up hatred on the part of the provider could be proved.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It also recommended that the government undertake a review
of the need for a specific offence of public sexual harassment (covering both
online and offline).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government has said in an <a href="https://www.gov.uk/government/publications/government-interim-response-to-the-law-commissions-review-modernising-communications-offences/government-interim-response-to-the-law-commissions-modernising-communications-offences-report">interim response to the communications offences report</a> that it proposes to include three of the
recommended offences in the Bill: the harm-based communications offence, the false
communications offence and the threatening communications offence. The
remainder are under consideration. The hate crime report awaits an interim
response.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">From the point of view of the safety duties under the Online
Safety Bill, the key consequence of new offences is that the dividing line
between the illegality duty and the ‘legal but harmful’ duties would shift.
However, the ‘reasonable grounds to believe’ threshold would not change, and
would apply to the new offences as it does to existing offences. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Petitions Committee acknowledged concerns over how the
proposed harm-based offence would intersect with the illegality duties:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The Law Commission is right to
recommend refocusing online communications offences onto the harm abusive messages can
cause to victims. We welcome the Government’s commitment to adopt the proposed
threatening and ‘harm-based’ communications offences. However, we also
acknowledge the uncertainty and hesitation of some witnesses about how the new
harm-based offence will be interpreted in practice, including the role of
social media companies and other online platforms in identifying this
content—as well as other witnesses’ desire for the law to deal with more cases
of online abuse more strongly.” <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It recommended monitoring the effectiveness of the offences
and that the government should publish an initial review of the workings and
impact of any new communications offences within the first two years after they
come into force.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Joint Committee supported the Law Commission
recommendations. It also suggested that concerns about ambiguity and the
context-dependent nature of the proposed harm-based offence could be addressed
through the statutory public interest requirement discussed above. [135]<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Annex<o:p></o:p></span></b></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">What is a duty of care?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">In its proper legal sense a duty of care is a duty to take
reasonable care to avoid injuring other people – that is why it is called a duty
<i>of care</i>. It is not a duty to prevent other people breaking the law. Nor
(other than exceptionally) is it a duty to prevent other people injuring each
other. Still less is it a duty to prevent other people speaking harshly to each
other. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">A duty of care exists in the common law of negligence and
occupiers’ liability. Analogous duties exist in regulatory contexts such as
health and safety law. A duty of care
does not, however, mean that everyone owes a duty to avoid causing any kind of
harm to anyone else in any situation. Quite the reverse. The scope of a duty of
care is limited by factors such as kinds of injury, causation, foreseeability
and others. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In particular, for arms-length relationships such as
property owner and visitor (the closest analogy to platform and user) the law
carefully restricts safety-related duties of care to objectively ascertainable
kinds of harm: physical injury and damage to property. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Objective injury v subjective harm</b> Once we move into subjective speech harms the
law is loath to impose a duty. The UK Supreme Court held in <i>Rhodes</i> that
the author of a book owes no duty to avoid causing distress to a potential
reader of the book. It said:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“It is difficult to envisage any
circumstances in which speech which is not deceptive, threatening or possibly
abusive, could give rise to liability in tort for wilful infringement of
another’s right to personal safety. The right to report the truth is
justification in itself. That is not to say that the right of disclosure is
absolute … . But there is no general law prohibiting the publication of facts
which will cause distress to another, even if that is the person’s intention.”
[77]<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That is the case whether the author sells one book or a
million, and whether the book languishes in obscurity or is advertised on the
side of every bus and taxi.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The source of some of the draft Bill’s most serious problems
lies in the attempt to wrench the concept of a safety-related duty
of care out of its offline context – risk of physical injury - and apply it to
the contested, subjectively perceived claims of harm that abound in the context
of speech. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In short, <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html" target="_blank">speech is not a tripping hazard</a>. Treating it as
such propels us ultimately into the territory of claiming that speech is
violence: a proposition that reduces freedom of expression to a self-cancelling
right. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Speech is protected as a fundamental right. Some would say
it is the right that underpins all other rights. It is precisely because speech
is <i>not</i> violence that Berkeley students enjoy the right to <a href="https://twitter.com/SophiaLeeHyun/status/908506014038630400">display placards proclaiming that speech is violent</a>. The state is – or should be -
powerless to prevent them, however wrong-headed their message. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Quite how, on the nature of speech, a Conservative
government has ended up standing shoulder to shoulder with those Berkeley
students is one of the ineffable mysteries of politics. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Causing v preventing</b> Even where someone is under a
duty to avoid causing physical injury to others, that does not generally
include a duty to prevent them from injuring each other. Exceptionally, such a
preventative duty can (but does not necessarily) arise, for instance where the
occupier of property does something that creates a risk of that happening.
Serving alcohol on the premises, or using property for a public golf course,
would be examples. Absent that, or a legally close relationship (such as
teacher-pupil) or an assumption of responsibility, there is no duty. Even less
would any preventative duty exist for what visitors say to each other on the
property.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The duty proposed to be imposed on UGC platforms is thus
doubly removed from offline duties of care. First, it would extend far beyond
physical injury into subjective harms. Second, the duty consists in the
platform being required to prevent or restrict how users behave to each other. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It might be argued that some activities (around algorithms,
perhaps) are liable to create risks that, by analogy with offline, could
justify imposing a preventative duty. That at least would frame the debate around
familiar principles, even if the kind of harm involved remained beyond bounds. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Had the online harms debate been conducted in those terms,
the logical conclusion would be that platforms that do not do anything to
create relevant risks should be excluded from scope. But that is not how it has
proceeded. True, much of the political rhetoric has focused on Big Tech and Evil
Algorithm. But the draft Bill goes much further than that. It assumes that
merely facilitating individual public speech by providing an online platform,
however basic that might be, is an inherently risk-creating activity that
justifies imposition of a duty of care. That proposition upends the basis on
which speech is protected as a fundamental right. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Safety by design </b>It may be suggested that by
designing in platform safety features from the start it is possible to reduce
or eliminate risk, while avoiding the problems of detecting, identifying and
moderating particular kinds of illegal or harmful content. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is true that some kinds of safety feature – a reporting
button, for instance – do not entail any
kind of content moderation. However, risk is not a self-contained concept. We
always have to ask: “risk of what?” If the answer is “risk of people encountering
illegal or harmful content”, at first sight that takes the platform back
towards trying to distinguish permissible from impermissible content. However,
that is not necessarily so.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">A typical example of safety by design concerns
amplification. It is suggested that platforms should be required to design in
‘friction’ features that inhibit sharing and re-sharing of content, especially
at scale. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The problem with a content-agnostic approach such as this is
that it inevitably strikes at all content alike (although it would no doubt be
argued the overall impact of de-amplification is skewed towards ‘bad’ content
since that is more likely to be shared and re-shared). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, the content-agnostic position is rarely maintained
rigorously, often reverting to discussion of ways of preventing amplification
of illegal or harmful content (which takes us back to identifying and
moderating such content). An example of this can be seen in Joint Committee
recommendation 82(e):<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Risks created by virality and
the frictionless sharing of content at scale, mitigated by measures to create
friction, slow down sharing whilst viral content is moderated, require active
moderation in groups over a certain size…”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Criticism of amplification is encapsulated in the slogan
‘freedom of speech is not freedom of reach’. As a matter of human rights law,
however, interference with the reach of communications certainly engages the
right of freedom of expression. As the Indian Supreme Court held in January
2020:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“There is no dispute that freedom
of speech and expression includes the right to disseminate information to as
wide a section of the population as is possible. The wider range of circulation
of information or its greater impact cannot restrict the content of the right
nor can it justify its denial.”<o:p></o:p></span></p>
<span style="font-family: georgia;"><b><span style="line-height: 115%;">Broadcast regulation</span></b><span style="line-height: 115%;"> The model adopted by the
draft Bill is discretionary regulation by regulator, rather than regulation by
the general law. Whether discretionary broadcast-style regulation is an
appropriate model for individual speech is a debate in its own right.</span></span><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">[Grammatical correction 19 Feb 2022]<br /></span><div><span style="font-family: georgia;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhSdeC7tIgma00YhQvg2tKHV98FwzXMWW-4UgLoXQXvlWaP7n7MwafhTFNQEpq3zz_oxtjqpDI64T6wZ7FvpuBOT1_BgUyGHrDMVAwaLGxfFSdMveCqItLvRZplHDTaebg436kz0mE5xiBIp9Qwpf2tvJnTtlhf9L0MGepLGkOzXV4L3MP7nQJcIFH-vQ=s135" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/a/AVvXsEhSdeC7tIgma00YhQvg2tKHV98FwzXMWW-4UgLoXQXvlWaP7n7MwafhTFNQEpq3zz_oxtjqpDI64T6wZ7FvpuBOT1_BgUyGHrDMVAwaLGxfFSdMveCqItLvRZplHDTaebg436kz0mE5xiBIp9Qwpf2tvJnTtlhf9L0MGepLGkOzXV4L3MP7nQJcIFH-vQ" width="135" /></a></div><br /><span style="font-size: 11pt; line-height: 115%;"><br /></span></span></div></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-69174218717309379422022-01-31T12:38:00.003+00:002022-11-02T21:20:09.561+00:00Internet legal developments to look out for in 2022<p><span style="font-family: georgia;">Another instalment of my annual round-up of what is on the
horizon for UK internet law <span style="color: red;">[Updated 29 April and 2 November 2022]</span>. It does stray a little beyond our shores, noting some significant
EU developments (pre-Brexit habits die hard). As always, it does not include
data protection (too big, not really my field).</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b><strike>Draft</strike> Online Safety Bill</b> The UK government published
its <a href="https://www.gov.uk/government/publications/draft-online-safety-bill">draft
Online Safety Bill</a> in May 2021. The Parliamentary Joint Pre-Legislative
Scrutiny Committee published its <a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/159784/no-longer-the-land-of-the-lawless-joint-committee-reports/">report
on the draft Bill</a> on 14 December 2021. A sub-committee of the Commons DCMS
Select Committee also published a <a href="https://committees.parliament.uk/work/1432/online-safety-and-online-harms/publications/">report</a>
on 24 January 2022, as did the Lords Communications and Digital Committee
Inquiry on <a href="https://committees.parliament.uk/work/745/freedom-of-expression-online/publications/">Freedom
of Expression Online</a> on 22 July 2021. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government <strike>is expected to </strike>introduce<span style="color: red;">d</span> <a href="https://bills.parliament.uk/bills/3137" target="_blank">a Bill</a> in<span style="color: red;">to</span> Parliament
<strike>by</strike> <span style="color: red;">on 17 </span>March 2022. <span style="color: red;">The Bill had its Second Reading on 19 April 2022. Its Report Stage is paused, likely to be recommenced this month. </span>Among many things for which the <strike>draft </strike>legislation is notable, its
abandonment of the <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html">ECD
Article 15</a> prohibition on general monitoring obligations stands out.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>EU Digital Services Act</b> The European Commission
published its <a href="https://ec.europa.eu/digital-single-market/en/digital-services-act-package">proposals</a> for
a Digital Services Act and a Digital Markets Act on 15 December 2020. The
proposed Digital Services Act includes replacements for Articles 12 to 15 of
the ECommerce Directive. Following a <a href="https://www.europarl.europa.eu/news/en/headlines/society/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained">vote
in the European Parliament on 20 January 2022</a>, the proposed legislation
<strike>will now </strike>enter<span style="color: red;">ed</span> the trilogue stage. <span style="color: red;"><a href="https://digital-strategy.ec.europa.eu/en/news/digital-services-act-commission-welcomes-political-agreement-rules-ensuring-safe-and-accountable" target="_blank">Political agreement</a> was reached on 23 April 2022. The final text was <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.277.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A277%3ATOC">published in the Official Journal</a> on 27 October 2022.</span></span></p><p class="MsoNormal"><span style="font-family: georgia;"><b>Terrorist content</b> The EU <a href="https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32021R0784">Regulation
on addressing the dissemination of terrorist content online</a> will come into
effect on 7 June 2022.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Erosion of intermediary liability shields by omission</b>
One by-product of Brexit is that the UK is no longer bound to implement the <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">conduit, caching and hosting shields</a> provided by the EU eCommerce Directive. <a href="https://www.gov.uk/guidance/the-ecommerce-directive-and-the-uk" target="_blank">The government says</a> that it “is committed to upholding the liability protections
now that the transition period has ended”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, implementation of that policy requires every new
piece of legislation that could impose liability on an intermediary explicitly
to include the protections. If that is not done, then, owing to the fact that
the original Electronic Commerce Directive Regulations 2002 <a href="https://www.cyberleagle.com/2021/02/corrosion-proofing-uks-intermediary.html">do
not have prospective effect</a>, the protections will not apply to that new
source of liability. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Two examples are already progressing through Parliament: the statutory
codification of the public nuisance offence in the Policing Bill <span style="color: red;">(which, following Royal Assent, <a href="https://www.legislation.gov.uk/ukpga/2022/32/section/78" target="_blank">came into force</a> on 26 June 2022)</span>, and the <a href="https://www.legislation.gov.uk/ukpga/2022/37/section/48" target="_blank">electronic election imprints offences</a> in the Elections Bill <span style="color: red;">(Royal Assent 28 April 2022, not yet in force)</span>, neither of which includes the
conduit, caching and hosting shields. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Such omissions have been known in the past, and were cured
by statutory instrument under the European Communities Act 1972. That option is
no longer available. As time goes on, accretion of such omissions in new
legislation will gradually erode the intermediary protections to which the
government is committed. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Law Commission Reports</b> The Law Commission has issued
two Reports making recommendations that are relevant to online speech. The
first is its <a href="https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/">Report
on Reform of the Communications Offences</a> (notably, recommending replacing
S.127 Communications Act 2003 and the
Malicious Communications Act 1988 with a <a href="https://www.cyberleagle.com/2021/11/licence-to-chill.html">new harm-based
offence</a>). The second report is on <a href="https://www.lawcom.gov.uk/project/hate-crime/">Hate Crime Laws</a>. The
recommendations on communications offences,<strike> at least, are being considered for
incorporation</strike> <span style="color: red;"><a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">have been included</a> </span>in the Online Safety Bill.</span></p><p class="MsoNormal"><span style="font-family: georgia;">
<b>Copyright</b> The Polish government’s challenge to Article 17 (<a href="http://curia.europa.eu/juris/document/document.jsf?text=&docid=216823&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=7919548"><i>Poland
v Parliament and Council</i>, Case C-401/19</a>) <strike>is pending</strike> <span style="color: red;"><a href="https://curia.europa.eu/juris/document/document.jsf?text=&docid=258261&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1073642" target="_blank">was decided</a> on 26 April 2022</span>. Poland argue<span style="color: red;">d</span> that
Article 17 makes it necessary for OSSPs, in order to avoid liability, to carry
out prior automatic filtering of content uploaded online by users, and
therefore to introduce preventive control mechanisms. It contend<span style="color: red;">ed</span> that such
mechanisms undermine the essence of the right to freedom of expression and
information and do not comply with the requirement that limitations imposed on
that right be proportionate and necessary. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The <a href="https://curia.europa.eu/juris/document/document.jsf;jsessionid=C16F26D3FE68CABE56981CEA4DE5821B?text=&docid=244201&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=783038">Advocate-General’s
Opinion</a> was delivered on 15 July 2021. It <span style="color: red;">wa</span>s something of an Opinion of
Solomon: recommending that the challenge be rejected, but only on the basis that
the Directive is implemented in a way that minimises false positives. The
Advocate General also, in a postscript, challenged aspects of the Article 17 guidance
issued by the Commission subsequent to the drafting of the Opinion. <span style="color: red;">The <a href="https://curia.europa.eu/juris/document/document.jsf?text=&docid=258261&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1073642" target="_blank">judgment</a> largely followed the Opinion, dismissing the challenge but on the basis of an interpretation of Article 17 that included strict safeguards against removal of lawful content.</span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Policing Bill </b>The <a href="https://bills.parliament.uk/bills/2839">Police, Crime, Sentencing and
Courts Bill</a> has ignited significant controversy over its impact on street
protests, including through its statutory codification of the common law
offence of public nuisance. The potential application of the new statutory offence
to <a href="https://www.cyberleagle.com/2021/04/seriously-annoying-tweets.html">online
speech</a>, however, has gone virtually unnoticed. <b><o:p></o:p></b></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Product Security and Telecommunications Infrastructure
Bill </b>An honourable mention for this Bill: a framework for imposing all
kinds of security requirements on (among other things) internet-connectable products.<b><o:p></o:p></b></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Back from the dead? The Digital Economy Act 2017 </b>The
non-commencement of the age verification provisions of the Digital Economy Act
2017 has long been a source of controversy. In November 2021 the High Court <a href="https://www.crowdjustice.com/case/protect-children-from-harmful-pornography/">gave
permission</a> to two members of the public to commence judicial review proceedings. <span style="color: red;">This may now in practice have been overtaken by the inclusion of pornography sites in the <a href="https://bills.parliament.uk/bills/3137/publications">Online Safety Bill</a>.</span><br />
<!--[endif]--><b><o:p></o:p></b></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Cross-border data access </b><span lang="EN-US">The US and the UK signed a </span><a href="https://www.gov.uk/government/news/uk-and-us-sign-landmark-data-access-agreement"><span lang="EN-US">Data Access Agreement</span></a><span lang="EN-US"> on 3 October 2019, providing
domestic law comfort zones for service providers to respond to data access demands
from authorities located in the other country. <strike>No announcement has yet been
made that Agreement has entered into operation. </strike><span style="color: red;">It <a href="https://www.justice.gov/opa/pr/landmark-us-uk-data-access-agreement-enters-force" target="_blank">came into force</a> on 3 October 2022.</span></span><b><o:p></o:p></b></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US">The <a href="https://search.coe.int/cm/pages/result_details.aspx?objectid=0900001680a48e4d">Second
Additional Protocol</a> to the Convention on Cybercrime on enhanced
co-operation and disclosure of electronic evidence <a href="https://www.coe.int/en/web/cybercrime/-/second-additional-protocol-to-the-cybercrime-convention-adopted-by-the-committee-of-ministers-of-the-council-of-europe">was open for signature from 12 May 2022</a> <span style="color: red;">and <a href="https://www.gov.uk/government/publications/second-additional-protocol-to-the-convention-on-cybercrime-on-enhanced-co-operation-and-disclosure-of-electronic-evidence-ms-no92022" target="_blank">presented to the UK Parliament</a> in July 2022</span>. </span><br />
<span lang="EN-US"><br />
<b>State communications surveillance </b>The kaleidoscopic mosaic of cases
capable of affecting the UK’s </span><a href="http://www.legislation.gov.uk/ukpga/2016/25/contents/enacted"><span lang="EN-US">Investigatory Powers Act 2016</span></a><span lang="EN-US"> (IP Act) continues to reshape
itself. In this field CJEU judgments will continue to be relevant in principle, since they
form the backdrop to future reviews of the European Commission’s June 2021 UK </span><a href="https://www.gov.uk/government/news/eu-adopts-adequacy-decisions-allowing-data-to-continue-flowing-freely-to-the-uk"><span lang="EN-US">data protection adequacy</span></a><span lang="EN-US"> decision. </span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US">Domestically,
Liberty has a pending judicial review of the IP Act bulk powers and data
retention powers. Some EU law aspects (including bulk powers) were stayed
pending the <i>Privacy International</i> reference to the CJEU. <strike>Those
aspects are now proceeding and, according to Liberty, are likely to be in court
in early 2022.</strike> The Divisional Court </span><a href="https://www.bailii.org/ew/cases/EWHC/Admin/2018/975.html" target="_blank"><span lang="EN-US">rejected</span></a><span lang="EN-US"> the claim that the IP Act data retention
powers provide for the general and indiscriminate retention of traffic and
location data, contrary to EU law. That point may in due course come before the
Court of Appeal. <span style="color: red;"><a href="https://www.bailii.org/ew/cases/EWHC/Admin/2022/1630.html" target="_blank">The Divisional Court gave judgment</a> on the stayed aspects on 24 June 2022. Liberty's claims were rejected except for one aspect concerning the need for prior independent authorisation for access to some retained data.</span> <o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b><span lang="EN-US">Investigatory
Powers Act review</span></b><span lang="EN-US"> The
second half of 2022 will see the Secretary of State preparing the report on the
operation of the IP Act required under Section 260 of the Act.<o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Electronic transactions</b> The pandemic focused
attention on legal obstacles to transacting electronically and remotely. Whilst
uncommon in commercial transactions, some impediments do exist and, in a few
cases, were temporarily relaxed. That may pave the way for permanent changes in
due course.<o:p></o:p></span></p>
<span style="line-height: 115%;"><span style="font-family: georgia;">Although the question
typically asked is whether electronic signatures can be used, the most
significant obstacles tend to be presented by surrounding formalities rather
than signature requirements themselves. A case in point is the physical
presence requirement for witnessing deeds, which stands in the way of remote
witnessing by video or screen-sharing. The Law Commission Report on Electronic
Execution of Documents recommended that the government should set up an
Industry Working Group to look at that and other issues. The Working Group has <a href="https://www.gov.uk/government/news/new-expert-group-to-increase-confidence-and-standards-in-e-signatures">now
been formed</a>. <span style="color: red;">It issued an <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051451/electronic-execution-documents-industry-working-group-interim-report.pdf" target="_blank">Interim Report</a> on 1 February 2022.</span></span></span><div><span style="color: red; font-family: georgia;"><br /></span></div><div><span style="color: red; font-family: georgia;">[Updated 29 April 2022 and 2 November 2022.]<br /></span><div><span style="font-family: georgia;"><br /></span><div><span style="line-height: 115%;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhdxzwMCT_G3EkRrRFddHJEmgFyuPWS6kvYpXxSMA8ctttBrh3cBAaI4Xyq933vXE6oYz2IIN6d3hxz5Qj_Px7KmAedjWlAH7BrVAB-MsccdU--TwOHm4F8Yvh4H-N_Jsi0ZDwBPq861TJzukpuc_Nl8pPZPZWJzTPhoP49zC2GmCApmJxy30y2OqV6vA=s135" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/a/AVvXsEhdxzwMCT_G3EkRrRFddHJEmgFyuPWS6kvYpXxSMA8ctttBrh3cBAaI4Xyq933vXE6oYz2IIN6d3hxz5Qj_Px7KmAedjWlAH7BrVAB-MsccdU--TwOHm4F8Yvh4H-N_Jsi0ZDwBPq861TJzukpuc_Nl8pPZPZWJzTPhoP49zC2GmCApmJxy30y2OqV6vA" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span></span></div></div></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-55499848894141486482021-11-22T14:02:00.006+00:002022-07-11T21:26:23.322+01:00Licence to chill<p><span style="font-family: georgia;">To begin with, a confession. I should probably have paid
more attention to the Law Commission’s <a href="https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/" target="_blank">project on reforming communications offences</a>. The Commission published its Final Report
in July 2021, recommending new offences to replace <a href="https://www.cyberleagle.com/2015/02/from-telegram-to-tweet-section-127-and.html">S.127 Communications Act 2003</a> and the Malicious Communications Act 1988. </span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Now that the government has indicated that it is minded to
accept the Law Commission’s recommendations, a closer – even if 11<sup>th</sup>
hour - look is called for: doubly so, since under the proposed Online Safety
Bill a service provider would be <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-concretised.html">obliged to take steps to remove user content</a>
if it has “reasonable grounds to believe” that the content is illegal. The two
provisions would thus work hand in glove. <span style="color: red;">[The Bill as introduced to Parliament omitted the "reasonable grounds to believe" threshold. <a href="https://www.cyberleagle.com/2022/03/mapping-online-safety-bill.html" target="_blank">It was silent</a> as to what standard a service provider should apply to adjudge illegality. "Reasonable grounds to infer" is now being introduced by a government amendment at Report Stage.] </span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is no doubt that S.127, at any rate, is in need of reform. The question is whether the proposed replacement is an improvement. Unfortunately, that closer look suggests that the Law
Commission’s recommended harm-based offence has significant problems. These
arise in particular for a public post to a general audience. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The proposed new offence<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The elements of the Law Commission’s proposed new offence
are:<o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">(1) the defendant sent or posted a communication that was
likely to cause harm to a likely audience;<o:p></o:p></span></i></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">(2) in sending or posting the communication, the
defendant intended to cause harm to a likely audience; and<o:p></o:p></span></i></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">(3) the defendant sent or posted the communication
without reasonable excuse.<o:p></o:p></span></i></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">(4) For the purposes of this offence:<o:p></o:p></span></i></p>
<p class="MsoNormal" style="text-indent: 36pt;"><i><span style="font-family: georgia;">(a) a communication is a
letter, article, or electronic communication;<o:p></o:p></span></i></p>
<p class="MsoNormal" style="margin-left: 36pt;"><i><span style="font-family: georgia;">(b) a likely audience is
someone who, at the point at which the communication was sent or posted by the
defendant, was likely to see, hear, or otherwise encounter it; and<o:p></o:p></span></i></p>
<p class="MsoNormal" style="text-indent: 36pt;"><i><span style="font-family: georgia;">(c) harm is psychological
harm, amounting to at least serious distress.<o:p></o:p></span></i></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">(5) When deciding whether the communication was likely to
cause harm to a likely audience, the court must have regard to the context in
which the communication was sent or posted, including the characteristics of a
likely audience.<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;"><i>(6) When deciding whether the defendant had a reasonable
excuse for sending or posting the communication, the court must have regard to
whether the communication was, or was meant as, a contribution to a matter of
public interest.</i><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission goes on to recommend that “likely” be
defined as “a real or substantial risk”. This requires no further explanation for “likely
to cause harm”. For “a likely audience”, it would mean a real or substantial
risk of seeing, hearing, or otherwise encountering the communication. (Report [2.119])<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Psychological harm<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The challenge for any communications offence based on harm to a reader is
how to reconcile the need for an objective rule governing speech with the subjectivity
of how speech is perceived. The cost of getting it wrong is that we end up with
a variation on the heckler’s veto: speech chilled by fear of criminal liability
arising from the bare assertion of a claim to have suffered harm.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The focus on likelihood of ‘psychological harm’ as the
criterion for the recommended offence has provoked criticism on grounds
of subjectivity. It is notorious that protagonists in controversial areas of
debate may claim to be traumatised by views with which they are in deep
disagreement. The very kinds of speech that are meant to
have the greatest freedom of expression protections – political and religious – are perhaps those in relation to which that kind of claim is most likely to be made.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission would argue that the recommended
offence revolves around whether relevant harm is <i>likely</i> to be caused to someone <i>likely</i>
to encounter the communication in question (the ‘conduct element’ of the
offence). Harm has to be both likely and serious. A bare claim to have suffered
harm would therefore not of itself demonstrate that harm was likely or serious, since a complainant might be unforeseeably sensitive. Additionally, the prosecution would have
to show that the communication was made without reasonable excuse and that the
defendant intended to harm someone likely to encounter the communication.
<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Kinds of audience<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission stresses that the offence would focus on
the factual context within which the communication took place. Thus, the
likelihood of a <i>private communication sent to one person</i> causing harm
would be adjudged most obviously according to the characteristics of the
intended recipient. If there was a real or substantial risk that the intended recipient
would suffer harm, then (whether or not the intended recipient actually
suffered harm) the conduct element would be made out. Further, if it was likely
(at the point of sending the communication) that someone other than the
intended recipient would also see the communication, then it would be relevant
to consider whether that other person would be likely to suffer harm from doing
so, taking into account their characteristics. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">A similar analysis would apply to a <i>group of readers</i>.
A post to a forum dedicated to disability issues would be likely to be read by
people with disabilities. That characteristic would be taken into
account, with the result that a likely audience would be likely to be caused
serious distress by a hate post about disabled people. The Law Commission
Consultation Paper applies that logic to the example of a tweet directed to a
well-known disability charity by means of the ‘@’ function. The likely audience
would primarily be the charity and its followers, many of whom could be assumed
to have a disability.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">How, though, should this analysis be applied to a <i>public
post to a general audience</i>? What would be the relevant characteristics of a
likely audience? How are those to be determined when no particular kind of
individual is especially likely to encounter the post?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Does the general nature of the audience mean that the risk
of satisfying the conduct element is reduced, because no particular relevant
characteristics of an audience can be identified? Or is the risk increased, as the larger the audience the more likely it is to contain at least one person with
characteristics such that they are likely to suffer harm? Since the draft
offence refers to ‘someone’, one likely person appears to be sufficient to amount to a likely audience. The
Consultation Paper at [5.124], discussing ‘likely audience’ in the context of
the then proposed mental element of the offence, adopts that position.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission Report does not fully address the question of the characteristics of a general audience. It responded to submissions raising concerns on the question of
public posts by rejecting suggestions that a “reasonable person” standard should
be applied, on the basis that sufficient protection was provided by the
requirement of intent to harm and the need to prove lack of reasonable excuse. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Actual or hypothetical audience?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The uncertainty about the position of public posts to a
general audience is exacerbated by lack of clarity over whether the conduct
element of the offence requires proof that someone likely to encounter the
communication actually did so (in which case the court’s analysis would
presumably tend to be focused on the characteristics of the person shown to have
encountered it, and the likelihood of their being harmed as a result); or
whether it would be sufficient to rely on the mere likelihood of someone
encountering it (in which case the court would appear to have to decide what characteristics to </span><span style="font-family: georgia;">attribute</span><span style="font-family: georgia;"> to </span><span style="font-family: georgia;">a hypothetical likely member of the audience).</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If the latter, then at least for a public post to a general audience the relevant factual context - a feature of the proposed offence on which the Law
Commission places considerable reliance - would seem, as regards the characteristics of the hypothetical person likely to suffer harm, to have to be constructed in the minds
of the judge or jury. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission states that the proposed offence is
complete, both for likely harm and likely audience, at the point of sending the
communication (Rep 2.56, 2.91, 2.117). On that logic it should not matter if
no-one can be shown actually to have been harmed or actually to have encountered the communication. Proof of
likelihood should suffice for both. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission also says (Rep 2.256)
that: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“where a communication was sent
or posted from a device to a social media platform, but was not made visible by
that platform (perhaps because of preventative algorithms), it could be
impossible for the offence to be made out because the prosecution would have to
prove that there was a likely audience who was at a real and substantial risk
of seeing the message. It might be that no one was at a real or substantial
risk of seeing the communication (i.e. the likely audience was nobody).”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If the offence is complete at the point of sending, and if
sending is the point at which the likely audience is to be determined, what would be the relevance of the post subsequently being blocked by the platform upon receipt? Does the likelihood of the post being blocked have to be considered? So could the offence still be committed if the post was unlikely to be blocked, but in fact was? Or, conversely, would the offence not be committed if the post was likely to be blocked, but slipped through? <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Such conundrums apart, the more </span><span style="font-family: georgia;">hypothetical the conduct element of the offence, the more significant is the Law Commission’s
rejection of a “reasonable person” when considering likelihood of harm. It leaves open the possibility that a notional
member of a likely audience could foreseeably be someone of unusual, or even extreme,
sensitivity.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Whether the likely audience member contemplated by the
offence is actual or notional, as already noted the Law Commission’s intention
appears to be that it would suffice if one person in the audience were likely
to encounter the communication and likely to suffer harm as a result. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The question of whether the actual presence of someone in the
audience has to be proved finds a
parallel in offences under the Public Order Act 1986. These differ as to
whether they require that a real person could have heard the relevant words, or
simply that a hypothetical person could have done so. Thus for S.5(1) Public
Order Act physical presence matters: were the words used “within the hearing or
sight of a person” likely to be caused harm? The presence of an actual person
likely to be caused harm has to be proved; but it does not have to be proved
that such person actually heard the words or suffered harm. If the person
present did hear them, the likelihood of their suffering relevant harm is
judged according to their relevant characteristics. Thus a police officer may
be regarded as possessing more fortitude than an ordinary member of the public.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In contrast, the offences of riot, affray and violent
disorder under the Public Order Act are all expressly framed by reference to
the effect of the conduct on a notional person of reasonable firmness hypothetically
present at the scene; with no requirement that such a person be at, or be
likely to be at, the scene. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Universal standards<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">One of the main criticisms of the existing law is that the
supposedly objective categories of speech laid down (such as ‘grossly
offensive’) are so vague as to be unacceptably subjective in their application
by prosecution and the courts. The Law Commission endorses that criticism.<span style="mso-spacerun: yes;"> </span>It rejects as unworkable universal standards
for categories of speech, in favour of a factually context-specific harm-based
approach.<span style="mso-spacerun: yes;"> </span></span></p><p class="MsoNormal"><span style="font-family: georgia;">Yet a completely hypothetical interpretation
of the Law Commission’s proposed offence could require the court to carry out
an exercise – attributing characteristics to a notional member of a general audience
- as subjective as that for which the existing offences (or at least s.127) are
rightly criticised.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission emphasises that “likely” harm means a
“real or substantial risk”, not a mere risk or possibility. But if the assumed
victim is a notional rather than an actual member of a general audience where does that lead, if not into the forbidden territory of
inviting the court to divine universal standards: a set of attributes with which a notional member of the audience has to be clothed?<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Claims to have suffered actual harm <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The converse of the Law Commission’s emphasis on “likely
harm” is that if someone claims to have suffered harm from encountering the
communication, or indeed proves that they actually have done so, that should
not be conclusive.<o:p></o:p></span></p><p class="MsoNormal"><span style="font-family: georgia;">In practice, as the Law Commission has acknowledged, evidence of actual harm to an actual person may count towards likelihood of harm (but may not be determinative). (Consultation Paper [5.90])</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Thus the Law Commission states that “the mere fact that
someone was harmed does not imply that harm was likely … the jury or magistrate
will have to determine as a matter of fact that, at the point of sending, harm
was likely. If a person has an extreme and entirely unforeseeable reaction, the
element of likely harm will not be satisfied.” (Report [2.107])<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, the Law Commission has also rejected the suggestion that a reasonableness standard should be applied. The result appears to be that
if one person of unusual sensitivity, sufficient to be
at real or substantial risk of harm, is foreseeably likely to encounter the communication,
then the “likely audience” requirement would be satisfied. Hence the significance of the possible argument that the larger the audience of a public post, the more likely that it
may contain such a person.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Insertion into an audience<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">At the level of practical consequences, whichever
interpretation of the proposed offence is correct – actual or hypothetical
likely audience member – it appears to provide a route for someone to attempt to criminalise someone else’s controversial views by inserting themselves into a
likely audience.<span style="mso-spacerun: yes;"> </span>The Law Commission accepted
the possibility of this tactic (Report [2.153]), but considered that other
elements of the offence (the need to prove lack of reasonable excuse and intent
to harm) would constitute sufficient protection from criminalisation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, whilst it discussed how a court might approach the
matter, the Report did not address in detail the possible deterrent effect on
continued communication, nor the interaction with the illegality provisions of
the draft Online Safety Bill.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">How might the tactic work? Let us assume a social media post
to a general audience, not about any one person, but expressing views with
which others may profoundly disagree – whether the subject matter be politics,
religion, or any other area in which some may claim to be traumatised by
views that they find repugnant.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Would such a communication be at risk of illegality if the
audience is likely to contain someone who would find what was said severely
distressing? The Law Commission’s answer is ‘No’: not because one sensitive person
in a general audience is not enough, but first of all because the necessary intent to
cause severe distress to a likely audience member would be lacking; and second, because ordinary (even if highly contentious) political discourse should count
as a contribution to a matter of public interest (Consultation Paper [5.185] – [5.187],
Report [2.152] – [2.153]).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Nevertheless, it would be an easy matter for someone who
objects to the contents of the post to seek to put further communications at risk by entering
the conversation. One reply from someone who claims to be severely distressed
by the views expressed could create an increased risk (actual or
perceived) of committing the offence if the views were to be repeated. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That would be the case whether ‘likely audience’ requires
the presence of an actual or hypothetical audience member. If it requires a
foreseeable actual audience member, one has now appeared. It could hardly
be suggested that, for the future, their presence is not foreseeable. The question
for the conduct element would be whether, as claimed, they would be likely to
be harmed.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If, on the other hand, the “likely audience” is entirely
hypothetical, would an intervention by a real person claiming to be harmed make
any difference? There are two reasons to think that it could:<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpFirst" style="margin-left: 18pt; mso-add-space: auto; mso-list: l0 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">1.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->If there were any doubt that it was foreseeable
that the audience is likely to contain someone with that degree of sensitivity,
that doubt is dispelled. <o:p></o:p></span></p>
<p class="MsoListParagraphCxSpLast" style="margin-left: 18pt; mso-add-space: auto; mso-list: l0 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">2.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->In practice, as the Law Commission has
acknowledged, evidence of actual harm to an actual person may count towards
likelihood of harm (but may not be determinative).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On either interpretation of the offence, any further
communications would be with knowledge of the audience member and their claim
to have been harmed. That would create a more concrete factual context for an argument
that likely harm resulting from any further communications was intentional.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Of course, if a further communication were to be prosecuted
and go to trial it still might not amount to an offence. The context would have
to be examined. Serious distress might not be established.<span style="mso-spacerun: yes;"> </span>The prosecution might not be able to prove
lack of reasonable excuse. Intent to harm might still not be established. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">But that is not really the significant issue where chilling effect is concerned. Rational apprehension of increased
risk of committing an offence, by virtue of crystallisation of a likely audience and the claim
to harm, would be capable of </span><span style="font-family: georgia;">creating a chilling effect on further communications</span><span style="font-family: georgia;">. </span></p><p class="MsoNormal"><span style="font-family: georgia;">The Law Commission may view the need to prove lack of reasonable
excuse and intent to harm as fundamental to a court’s consideration. However, someone
told that their potential criminal liability for future posts rests on those
two criteria might, rationally, take a less sanguine view.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If insertion into the audience has not chilled further communication,
a further tactical step could be to notify the platform and assert that they have reasonable
grounds to believe the continuing posts are illegal. Reasonable grounds (not
actual illegality, manifest illegality or even likely illegality) is the threshold that would trigger
the platform’s duty to take the posts down swiftly under S.9(3)(d) of the draft
Online Safety Bill. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Conclusion<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission’s proposal draws some inspiration from
legislation enacted in 2015 in New Zealand. That, too, is contextual and
harm-based. However, the New Zealand offence is firmly anchored in actual harm
to an actual identifiable person at whom the communication was targeted, and is qualified by an ‘ordinary reasonable person’
provision. The Law Commission has cut its recommended offence adrift from those
moorings. </span></p><p class="MsoNormal"><span style="font-family: georgia;">That has significant consequences for the scope of the conduct element
of the offence, especially when applied to public posts to a general audience. The
structure of the conduct element also lends itself to tactical chilling of speech. It is questionable whether these concerns would be sufficiently mitigated
by the requirement to prove intent to harm and lack of reasonable excuse.</span><o:p></o:p></p><p class="MsoNormal"><span style="font-family: georgia;">[Unintended negative at end of section 'Psychological harm' corrected 4 Dec 2021; Updated 29 April 2022 to note omission of "reasonable grounds to believe" in Bill as introduced to Parliament; and 11 July 2022 to note introduction of "reasonable grounds to infer" by proposed government amendment at Report Stage.]</span></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-61894840673437437942021-11-03T19:17:00.004+00:002021-11-04T08:13:13.802+00:00The draft Online Safety Bill concretised<p><b style="font-family: georgia; text-indent: -18pt;">A.<span style="font-size: 7pt; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: normal; line-height: normal;">
</span></b><b style="font-family: georgia; text-indent: -18pt;">Introduction</b><span style="font-family: georgia;"> </span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">1.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The draft Online Safety Bill is nothing if not
abstract. Whether it is defining the adult (or child) of ordinary
sensibilities, mandating proportionate systems and processes, or balancing
safety, privacy, and freedom of speech within the law, the draft Bill
resolutely eschews specifics. <span style="mso-spacerun: yes;"> </span></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">2.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The detailing of the draft Bill’s preliminary
design is to be executed in due course by secondary legislation, with Ofcom
guidance and Codes of Practice to follow. Even at that point, there is no
guarantee that the outcome would be clear rules that would enable a user to
determine on which side of the safety line any given item of content might
fall.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">3.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->Notwithstanding its abstract framing, the impact
of the draft Bill (should it become law) <a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html">would be on individual items of content posted by users</a>. But how can we evaluate that impact where legislation is
calculatedly abstract, and before any of the detail is painted in?</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">4.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->We have to <a href="https://drive.google.com/file/d/1e4vSClZWin0wyG6PK68lzH7DtvjVfuZv/view">concretise the draft Bill’s abstractions</a>: test them against a hypothetical scenario and deduce (if we can)
what might result. This post is an attempt to do that.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l1 level1 lfo3; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><b><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">B.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span></b><!--[endif]--><b>A concrete hypothetical</b></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto;"><i><span style="font-family: georgia;">Our scenario concerns an amateur blogger who specialises in commenting
on the affairs of his local authority. He writes a series of blogposts (which
he also posts to his social media accounts) critical of a senior officer
of the local authority, who has previously made public a history of struggling
with mental health issues. The officer says that the posts have had an impact
on her mental health and that she has sought counselling.</span></i></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">5.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->This hypothetical scenario is adapted from the <a href="https://www.bailii.org/ew/cases/EWHC/QB/2021/2012.html"><i>Sandwell Skidder </i>case</a>, in which a council officer brought civil
proceedings for harassment under the Protection from Harassment Act 1997 against
a local blogger, a self-proclaimed “citizen journalist”.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">6.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The court described the posts in that case as a “series of
unpleasant, personally critical publications”, albeit not factually untrue
ones. It emphasised that nothing in the judgment should be taken as
holding that the criticisms were justified. Nevertheless, and not doubting what
the council officer said about the impact on her, in a judgment running to 92
paragraphs the court held that the proceedings for harassment stood no
reasonable prospect of success and granted the blogger summary judgment.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">7.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->In several respects the facts and legal analysis
in the Sandwell Skidder judgment carry resonance for the duties that
the draft Bill would impose on a user-to-user (U2U) service provider:</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">a.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The claim of impact on mental health.<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">b.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The significance of context (including the seniority
of the council officer, the council officer’s own previous video describing her
struggle with mental health issues; and the legal requirement for there to have
been more than a single post by the defendant).<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">c.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The defendant being an amateur blogger rather
than a professional journalist (the court held that the journalistic nature of
the blog was what mattered, not the status of the person who wrote it).<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">d.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The legal requirement that liability for
harassment should be interpreted by reference to Art 10 ECHR.<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">e.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The significance for the freedom of expression
analysis of the case being one of publication to the world at large.<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 54pt; mso-add-space: auto; mso-list: l2 level2 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">f.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The relevance that similar considerations would
have to the criminal offence of harassment under the 1997 Act.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">8.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->Our hypothetical potentially requires consideration of service
provider safety duties for <b>illegality</b> and (for a Category 1 service
provider) <b>content harmful to adults</b>. (Category 1 service providers would
be designated on the basis of being high risk by reason of size and
functionality.)</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">9.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The scenario would also engage service provider duties
in respect of some or all of <b>freedom of expression</b>, <b>privacy</b>, and
(for a Category 1 service provider) <b>journalistic content</b> and <b>content
of democratic importance</b>.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">10.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->We will assume, for simplicity, that the service
provider in question does not have to comply with the draft Bill’s “content
harmful to children” safety duty.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l1 level1 lfo3; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><b><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">C.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span></b><!--[endif]--><b>The safety duties in summary</b></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">11.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The draft Bill’s <b>illegality safety duties</b>
are of two kinds: <b>proactive/preventative</b> and <b>reactive</b>.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">12.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The general <b>proactive/preventative</b> safety
duties under S.9(3)(a) to (c) apply to <b>priority illegal content</b>
designated as such by secondary legislation. Although these duties do not
expressly stipulate monitoring and filtering, preventative systems and processes
are to some extent implicit in e.g. the duty to ‘minimise the presence of priority illegal
content’.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">13.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->It is noteworthy, however, that an Ofcom enforcement decision
cannot require steps to be taken “to use technology to identify a particular
kind of content present on the service with a view to taking down such content”
(S.83(11)).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">14.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->Our hypothetical will assume that criminally
harassing content has been designated as priority illegal content.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">15.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The only explicitly <b>reactive duty</b> is
under S.9(3)(d), which applies to <b>all in-scope illegal content</b>. The duty
sits alongside the <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html">hosting protection in the eCommerce Directive</a>, but is cast as a
<a href="https://www.techdirt.com/articles/20200824/13595845171/intermediary-liability-responsibilities-post-brexit-graham-smith.shtml">positive obligation</a> to remove in-scope illegal content upon gaining awareness
of its presence, rather than (as in the eCommerce Directive)
exposing the provider to potential liability under the relevant substantive
law. The knowledge threshold appears to be lower than that in the eCommerce
Directive.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">16.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->There is also a duty under S.9(2), applicable to
all in-scope illegality, to take “proportionate steps to mitigate and
effectively manage” risks of physical and psychological harm to individuals.
This is tied in some degree to the illegal content risk assessment that a
service provider is required to carry out. For simplicity, we shall consider
only the proactive and reactive illegality safety duties under S.9(3).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">17.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]--><b>Illegality</b> refers to certain types of
criminal offence set out in the draft Bill. They would include the harassment
offence under the 1997 Act.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">18.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The illegality safety duties apply to user content
that the service provider has reasonable grounds to believe is illegal, even though it may not in fact be illegal. As the government has said in its <a href="https://committees.parliament.uk/work/745/freedom-of-expression-online/publications/">Response</a> to the
House of Lords Communications and Digital Committee Report on Freedom of
Expression in the Digital Age:</span></p>
<p class="MsoListParagraphCxSpMiddle"></p><blockquote><span style="font-family: georgia;">“<a name="_Hlk86742734">Platforms will need
to take action where they have reasonable grounds to believe that content
amounts to a relevant offence. They will need to ensure their content
moderation systems are able to decide whether something meets that test.”</a></span></blockquote><p></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">19.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->That, under the draft Bill’s definition of
illegal content, applies not only to content actually present on the provider’s
service, but to kinds of content that may hypothetically be present on its
service in the future.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">20.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->That
would draw the service provider into some degree of predictive policing. It also
raises questions about the level of generality at which the draft Bill would
require predictions to be made and how those should translate into individual
decisions about concrete items of content.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">21.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->For example, would a complaint by a known person
about a known content source that passed the ‘reasonable grounds’ threshold concretise
the duty to minimise the presence of priority illegal content? Would that require
the source of the content, or content about the complainant, to be specifically
targeted by minimisation measures? This has similarities to the long-running debate
about ‘stay-down’ obligations on service providers.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">22.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
question of the required level of generality or granularity, which also arises
in relation to the ‘content harmful to adults’ duty, necessitates close examination
of the provisions defining the safety duties and the risk assessment duties upon
which some aspects of the safety duties rest. It may be that there is not meant
to be one answer to the question; that it all comes down to proportionality, Ofcom
guidance and Codes of Practice. <span style="mso-spacerun: yes;"> </span>However,
even taking that into account, some aspects remain difficult to fit together
satisfactorily. If there is an obvious solution to those, no doubt someone will
point me to it.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">23.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
<b>“content harmful to adults” safety duty</b> requires a Category 1 service
provider to make clear in its terms and conditions how such content would be
dealt with and to apply those terms and conditions consistently. There is a question,
on the wording of the draft Bill, as to whether a service provider can state that
‘we do nothing about this kind of harmful content’. The government’s position
is understood to be that that would be permissible.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">24.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
government’s recent Response to the Lords Communications and Digital Committee
Report on Freedom of Expression in the Digital Age says:</span></p>
<p class="MsoListParagraphCxSpMiddle"></p><blockquote><span style="font-family: georgia;">“Where harmful misinformation and
disinformation does not cross the criminal threshold, the biggest platforms
(Category 1 services) will be required to set out what is and is not acceptable
on their services, and enforce the rules consistently. If platforms choose to
allow harmful content to be shared on their services, they should consider
other steps to mitigate the risk of harm to users, such as not amplifying such
content through recommendation algorithms or applying labels warning users
about the potential harm.”</span></blockquote><p></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">25.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->If
the government means that considering those “other steps” forms part of the
Category 1 service provider’s duty, it is not obvious from where in the draft
Bill that might stem.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">26.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->In
fulfilling any kind of safety duty under the draft Bill a service provider
would be required to have regard to the importance of protecting users’ right
to freedom of expression within the law. Similarly, it would be required to have regard to the
importance of protecting users from unwarranted infringements of privacy. (Parenthetically,
in the <i>Sandwell Skidder</i> case privacy was held not to be a significant factor in
view of the council officer’s own previous published video.)</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">27.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Category
1 providers would be under further duties to take into account the importance
of journalistic content and content of democratic importance when making
decisions about how to treat such content and whether to take action against a
user generating, uploading or sharing such content.</span></p>
<p class="MsoListParagraphCxSpLast" style="margin-left: 18pt; mso-add-space: auto; mso-list: l1 level1 lfo3; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><b><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">D.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span></b><!--[endif]--><b>Implementing the illegality safety duties
<o:p></o:p></b></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Proactive illegality duties: S.9(3)(a) to (c)<o:p></o:p></span></i></p>
<p class="MsoListParagraphCxSpFirst" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">28.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->We
have assumed that secondary legislation has designated criminally harassing
content as priority illegal content. The provider has to have systems and
processes designed to minimise the presence of priority illegal content, the
length of time for which it is present, and the dissemination of such content.
Those systems could be automated, manual or both.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">29.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->That
general requirement has to be translated into an actual system or process making
actual decisions about actual content. The system would (presumably) have to
try to predict the variety of forms of harassment that might hypothetically be
present in the future, and detect and identify those that pass the illegality threshold
(reasonable grounds to believe that the content is criminally harassing).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">30.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Simultaneously
it would have to try to avoid false positives that would result in the suppression of user
content falling short of that threshold. That would seem to follow from the
service provider’s duty to have regard to the importance of protecting users’
right to freedom of expression within the law. For Category 1 service providers
that may be reinforced by the journalistic content and content of democratic importance
duties. On the basis of the <i>Sandwell Skidder</i> judgment our hypothetical blog
should qualify at least as journalistic content.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">31.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->What would that involve in concrete terms?
First, the system or process would have to understand <b>what does and does not
constitute a criminal offence</b>. That would apply at least to human
moderators. Automated systems might be expected to do likewise. The S.9(3) duty
makes no distinction (albeit there appears to be tension between the proactive
provisions of S.9(3) and the limitation on Ofcom’s enforcement power in S.83(11)
(para 13 above)).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">32.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Parenthetically,
where harassment is concerned, it is not only the offence under the 1997 Act that might
have to be understood. Hypothetical content could also have to be considered
under any other potentially applicable offences - the S.127 Communications Act
offences, say (or their possible replacement by a ‘psychological harm’ offence
as recommended by the Law Commission); and the common law offence of public
nuisance or its statutory replacement under the Policing Bill currently going
through Parliament.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">33.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->It
is worth considering, by reference to some extracts from the caselaw, what understanding
the 1997 Act harassment offence might involve:</span></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->There is no statutory definition of harassment.
It “was left deliberately wide and open-ended” (<i>Majrowski v Guy’s and
St Thomas’s NHS Trust</i> [2006] ICR 1199)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The conduct must cross “the boundary from the
regrettable to the unacceptable” (<i>ibid</i>)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->“… courts will have in mind that irritations,
annoyances, even a measure of upset, arise at times in everybody’s day-to-day
dealings with other people. Courts are well able to recognise the boundary
between conduct which is unattractive, even unreasonable, and conduct which is
oppressive and unacceptable” (<i>ibid</i>)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Reference in the Act to alarming the person or
causing distress is not a definition; it is merely guidance as to one element.
(<i>Hayes v Willoughby</i> [2013] 1 WLR 935).</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->“It would be a serious interference with freedom
of expression if those wishing to express their own views could be silenced by,
or threatened with, claims for harassment based on <u>subjective</u> claims by
individuals that they feel offended or insulted.” (<i>Trimingham v Associated
Newspapers Ltd</i> [2012] EWHC 1296)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->“When Article 10 [ECHR] is engaged then the
Court must apply an intense focus on the relevant competing rights… .
Harassment by speech cases are usually highly fact- and context-specific.” (<i>Canada
Goose v Persons Unknown</i> [2019] EWHC 2459)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->“The real question is whether the conduct
complained of has extra elements of oppression, persistence and unpleasantness
and therefore crosses the line… . There may be a further question, which is
whether the content of statements can be distinguished from their mode of
delivery.” (<i>Merlin Entertainments v Cave</i> [2014] EWHC 3036)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo2; text-indent: -18pt;"></p><ul style="text-align: left;"><li><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;"><span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->“[P]ublication to the world at large engages the
core of the right to freedom of expression. … In the social media context it
can be more difficult to distinguish between speech which is “targeted” at an
individual and speech that is published to the world at large.” (<i>McNally v
Saunders</i> [2021] EWHC 2012)</span></li></ul><p></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">34.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span>Harassment under the 1997 Act is thus a highly nuanced concept - less of a bright line
rule that can be translated into an algorithm and more of an exercise in
balancing different rights and interests against background factual context –
something that even the courts do not find easy.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">35.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->For
the harassment offence the task of identifying criminal content on a U2U
service is complicated by the central importance of <b>context and repetition</b>.
The potential relevance of external context is illustrated by the claimant’s prior
published video in the <i>Sandwell Skidder</i> case. A service provider’s systems are
unlikely to be aware of relevant external context.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">36.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->As
to repetition, initially lawful conduct may become unlawful as the result of
the manner in which it is pursued and its persistence. That is because the
harassment offence requires, in the case of conduct in relation to a single
person, conduct on at least two occasions in relation to that person. That is a
bright line rule. One occasion is not enough.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">37.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->It would seem, therefore, to be logically
impossible for a proactive moderation system to detect a single
post and validly determine that it amounts to criminal harassment, or even that
there are reasonable grounds to believe that it does. The system would have to have
detected and considered together, or perhaps inferred the existence of, more
than one harassing post.</span></p>
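<span style="font-family: georgia;">The consequence of the two-occasions rule can be illustrated in code. The sketch below is purely hypothetical: the scoring heuristic, the threshold and all names are invented for the example and are not drawn from the draft Bill or any real moderation system. It shows only the structural point made above: a system that evaluates posts one at a time can never, by itself, establish a course of conduct; it must aggregate candidate posts per author-and-target pair, and a pair with a single post can never be flagged.</span>

```python
from collections import defaultdict

# Hypothetical illustration only: the heuristic and threshold are invented
# for this sketch, not drawn from the draft Bill or any real system.
HARASSMENT_SCORE_THRESHOLD = 0.8  # assumed per-post classifier cut-off

def looks_harassing(post_text: str) -> bool:
    """Stand-in for a per-post judgement that, in reality, is highly
    fact- and context-specific (see the caselaw extracts above)."""
    score = 0.9 if "example insult" in post_text else 0.1  # toy heuristic
    return score >= HARASSMENT_SCORE_THRESHOLD

def flag_course_of_conduct(posts):
    """posts: iterable of (author, target, text) tuples.
    Returns only (author, target) pairs with candidate posts on at least
    two occasions, reflecting the bright-line rule that one occasion is
    never enough for the 1997 Act offence."""
    candidates = defaultdict(list)
    for author, target, text in posts:
        if looks_harassing(text):
            candidates[(author, target)].append(text)
    return {pair: texts for pair, texts in candidates.items() if len(texts) >= 2}

posts = [
    ("blogger", "officer", "example insult one"),
    ("blogger", "officer", "example insult two"),
    ("other_user", "officer", "example insult"),
]
flagged = flag_course_of_conduct(posts)
# Only the ("blogger", "officer") pair can be flagged: the identical single
# post by "other_user" cannot, on its own, amount to a course of conduct.
```

<span style="font-family: georgia;">Even this toy aggregation assumes away the hard part: the per-post judgement, which (as the caselaw extracts show) is not reducible to a score.</span>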
<p class="MsoListParagraphCxSpLast" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">38.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
court in the <i>Sandwell Skidder</i> case devoted 92 paragraphs of judgment to setting out
the facts and the law, and to weighing up whether the Sandwell Skidder’s posts
amounted to harassment under the 1997 Act. That luxury would not be available to the proactive
detection and moderation systems apparently envisaged by the draft Bill, at least to the extent that - unlike a
court - they would have to operate at scale and in real or near-real time. <o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Reactive illegality duty: S.9(3)(d)<o:p></o:p></span></i></p>
<p class="MsoListParagraphCxSpFirst" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">39.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
reactive duty on the service provider under S.9(3)(d) is to have proportionate
systems and processes in place designed to: “where [it] is alerted by a person
to the presence of any illegal content, or becomes aware of it in any other
way, swiftly take down such content”.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">40.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Let
us assume that the service provider’s proactive illegality systems and
processes have not already suppressed references to our citizen journalist’s
blogposts. Suppose that, instead of taking a harassment complaint to court, the
subject of the blogposts complains to the service provider. What happens then?</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">41.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->In terms of knowledge and understanding of the
law of criminal harassment, nothing differs from the proactive duties. From a
factual perspective, the complainant may well have provided the service
provider with more context as seen from the complainant’s perspective.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">42.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->As
with the proactive duties, the threshold that triggers the reactive takedown duty
is not awareness that the content is actually illegal. If there are reasonable
grounds to believe that use or dissemination of the content amounts to a
relevant criminal offence, the service provider is positively obliged to have a
system or process designed to take it down swiftly.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">43.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->At
the same time, however, it is required to have regard to the importance of
freedom of expression within the law (and, if a Category 1 service provider, to
take into account the importance of journalistic content and content of
democratic importance).</span></p>
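<span style="font-family: georgia;">In system terms, the reactive duty amounts to a complaint-driven workflow. The sketch below is a hypothetical illustration (the names, types and the reduction of the decision to a yes/no callable are assumptions for the example, not anything prescribed by the draft Bill). It deliberately takes the “reasonable grounds to believe” judgement as an input rather than implementing it, because no mechanical test for it exists; what remains is only the shape of the decision, together with the competing considerations the provider must weigh.</span>

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    TAKE_DOWN = "take down swiftly"  # the S.9(3)(d) response
    LEAVE_UP = "leave up"            # threshold not met

@dataclass
class Complaint:
    content_id: str
    complainant_context: str  # extra factual context supplied by the complainant

def handle_complaint(
    complaint: Complaint,
    reasonable_grounds: Callable[[Complaint], bool],
    is_category_1: bool,
) -> tuple[Decision, list[str]]:
    """The illegality judgement is passed in as a callable because it is
    essentially judicial in nature; this sketch models only the surrounding
    workflow. Returns the decision and the considerations to be weighed."""
    considerations = ["freedom of expression within the law"]
    if is_category_1:
        considerations += ["importance of journalistic content",
                           "importance of content of democratic importance"]
    # The takedown threshold is reasonable grounds to believe the content is
    # illegal, not proof of actual illegality.
    if not reasonable_grounds(complaint):
        return Decision.LEAVE_UP, considerations
    # The draft Bill does not say how the considerations above are to be
    # reconciled with the takedown duty; here they are simply recorded.
    return Decision.TAKE_DOWN, considerations

# Usage with a stub judgement that treats the threshold as met:
c = Complaint("post-123", "prior course of conduct described by complainant")
decision, weighed = handle_complaint(c, lambda _: True, is_category_1=True)
```

<span style="font-family: georgia;">Everything difficult lives inside the callable that the sketch leaves unimplemented; that is the point.</span>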
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">44.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Apart
from the reduced threshold for illegality the exercise demanded of a service
provider at this point is essentially that of a court. The fact that the
service provider might not be sanctioned by the regulator for coming to an
individual decision which the regulator did not agree with (<a href="https://www.cyberleagle.com/2021/11/the-draft-online-safety-bill-systemic.html">see here</a>) does not
detract from the essentially judicial role that the draft Bill would impose on
the service provider.</span><span style="font-family: georgia;"> </span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l1 level1 lfo3; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><b><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">E.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span></b><!--[endif]--><b>Implementing the ‘content harmful to
adults’ safety duty</b></span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">45.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Category
1 services would be under a safety duty in respect of ‘<b>content harmful to
adults</b>’.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">46.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->What
is ‘<b>content harmful to adults</b>’? It comes in two versions: <b>priority</b>
and <b>non-priority</b>. The Secretary of State is able (under a peculiar
regulation-making power that on the face of it is not limited to physical or
psychological harm) to designate harassing content (whether or not illegal) as <b>priority
content harmful to adults</b>.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">47.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Content
is <b>non-priority content harmful to adults</b> if its nature is such that
“there is a material risk of the content having, or indirectly having, a <b>significant
adverse physical or psychological impact</b> on an <b>adult of ordinary
sensibilities</b>”.<span style="mso-spacerun: yes;"> </span>A series of
sub-definitions drills down to characteristics and sensibilities of groups of
people, and then to those of known individuals. Non-priority content harmful to adults cannot also be illegal content (S.46(8)(a)).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">48.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Whether
the content be priority or non-priority, the Category 1 service provider has to explain
clearly and accessibly in its terms and conditions how it would deal with
actual content of that kind; and then apply those terms and conditions
consistently (S.11).</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">49.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->As already mentioned (para 23), the extent of the ‘content harmful to adults’ duty is
debatable. ‘How’ could imply that such content should be dealt with in some
way. The government’s intention is understood to be that the duty is transparency-only,
so that the service provider is free to state in its terms and conditions that
it does nothing about such content.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">50.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Even
on that basis, the practical question arises of how general or specific the
descriptions of harmful content in the terms and conditions have to be. Priority
content could probably be addressed at the generic level of kinds of priority
content designated in secondary legislation. Whether our hypothetical blogpost
would fall within any of those categories would depend on how harassing content
had been described in secondary legislation – for instance, whether a course
of conduct was stipulated, as with the criminal offence.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">51.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The question of level of generality is much less
easy to answer for non-priority content. For instance, the element of the ‘non-priority
content harmful to adults’ definition that concerns known adults appears to
have no discernible function in the draft Bill unless it in some way affects the Category 1 service provider’s ‘terms and conditions’ duty. Yet if it does have an effect
of that kind, it is difficult to see what that effect is intended to be.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">52.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
fictional character of the “<b>adult of ordinary sensibilities</b>” (see <a href="https://www.cyberleagle.com/2021/06/on-trail-of-person-of-ordinary.html">here</a>
for a detailed discussion of this concept and its antecedents) sets out initially
to define an objective standard for adverse psychological impact (albeit the
sub-definitions progressively move away from that). An objective standard aims
to address the problem of someone subjectively claiming to have suffered harm from
reading or viewing material: a purely subjective test would risk embedding the
sensitivities of the most easily offended reader.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">53.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->For
non-priority content harmful to adults, the S.11 duty kicks in if harassing
content has been identified as a risk in the “adults’ risk assessment” that a
Category 1 service provider is required to undertake. As with illegal content,
content harmful to adults includes content hypothetically present on the
system.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">54.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->This
relationship creates the conundrum that the higher the level of abstraction at
which the adults’ risk assessment is conducted, the greater the gap that has to
be bridged when translating it into decisions about actual content; alternatively, if risk assessment
is conducted at a more granular and concrete level, for instance down to known
content sources and known individuals who are the subject of online content, the
task could rapidly multiply beyond the point of feasibility.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">55.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->So, what happens if the Category 1 service provider
is aware of a specific blog posted to its service, or of specific content
contained in such a blog, or of a specific person who is the subject of posts in the blog? Would that affect
how it had to fulfil its duties in relation to content harmful to adults?</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">56.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Take
first a <b>known blog</b> and consider the service provider’s <b>transparency</b> duty. Does the service
provider have to explain in its terms and conditions how content from
individually identified user sources is to be dealt with? On the face of it that
would appear to be a strange result. However, the transparency duty and its
underlying risk assessment duty are framed by means of an uneasy combination of
references to ‘kind’ of content and ‘content’, which leaves the intended levels
of generality or granularity difficult to discern.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">57.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->The obvious response to this kind of issue may
be that a service provider is required only to put in place proportionate
systems and processes. That, however, provides no clear answer to the concrete
question that the service provider would face: do I have to name any specific
content sources in my terms and conditions and explain how they will be dealt
with; if so, how do I decide which?</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">58.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Turning
now to a <b>known subject of a blog</b>, unlike for known content sources the
draft Bill contains some specific, potentially relevant, provisions. It expressly provides
that where the service provider knows of a particular adult who is the subject
of user content on its service, or to whom it knows that such content is
directed, it is that adult’s sensibilities and characteristics that are
relevant. The legal fiction of the objective adult of ordinary sensibilities is
replaced by the actual subject of the blogpost.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">59.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->So
in the case of our hypothetical blog, once the council officer complains to the
service provider, the service provider knows of the complainant’s identity and
also, crucially, knows of the assertion that they have suffered psychological
harm as a result of the content on its service.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">60.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->The
service provider’s duty is triggered not by establishing actual psychological
harm, but by reasonable grounds to believe that there is a material risk of the
content having a significant adverse physical or psychological impact. Let us assume that the service provider has concluded that its ‘harmful to adults’ duty is at least arguably
triggered. What does the service provider have to do?</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">61.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;">
</span></span></span><!--[endif]-->As with a known blog or blogpost, focusing the duty at the level of a known person raises the question: does the service provider
have to state in its terms and conditions how posts about, or directed at, that
named person will be dealt with? Does it have to incorporate a list of such
known persons in its terms and conditions? It is hard to believe that that is the government’s
intention. Yet combining the Category 1 safety duty under S.11(2)(b) with the
individualised version of the 'adult of ordinary sensibilities' appears to lean
in that direction.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">62.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->If
that is not the consequence, and if the Category 1 duty in relation to content
harmful to adults is ‘transparency-only’, then how (if at all) would the ‘known person’
provision of the draft Bill affect what the service provider is required to do? What function does it perform? If the ‘known person’ provision does have
some kind of substantive consequence, what might that be? That may raise the
question whether someone who claims to be at risk of significant adverse
psychological impact from the activities of a blogger could exercise some
degree of personal veto or some other kind of control over dissemination of the posts.</span></p>
<p class="MsoListParagraphCxSpMiddle" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"><!--[if !supportLists]--><span style="font-family: georgia;"><span style="mso-bidi-font-family: Georgia; mso-fareast-font-family: Georgia;"><span style="mso-list: Ignore;">63.<span style="font-size: 7pt; font-stretch: normal; font-style: normal; font-variant: normal; font-weight: normal; line-height: normal;"> </span></span></span><!--[endif]-->Whatever the answer may be to the difficult questions that the draft Bill poses, what it evidently does do is propel service providers into a more central role in
determining controversies: all in-scope service providers, where a
decision has to be made as to whether there are reasonable grounds to believe
that the content is illegal, or presents a material risk of significant adverse psychological
impact on an under-18; and Category 1 service providers additionally for content harmful to adults.</span><span style="font-family: georgia; text-indent: -18pt;"> </span><span style="font-family: georgia; text-indent: -18pt;"> </span></p><p class="MsoListParagraphCxSpLast" style="margin-left: 18pt; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -18pt;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" width="135" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-84659316754318152242021-11-01T15:48:00.001+00:002021-11-01T15:48:20.690+00:00The draft Online Safety Bill: systemic or content-focused?<p><span style="font-family: georgia;">One of the more intriguing aspects of the <a href="https://www.gov.uk/government/publications/draft-online-safety-bill">draft Online Safety Bill</a> is the government’s insistence that the safety duties under the draft Bill
are not about individual items of content, but about having appropriate systems
and processes in place; and that this is protective of freedom of expression.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Thus in <a href="https://committees.parliament.uk/writtenevidence/38883/html/">written evidence</a> to the Joint Parliamentary
Committee scrutinising the draft Bill the DCMS said:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="line-height: 115%;"><span style="font-family: georgia;">“The regulatory framework set out in the draft Bill is
entirely centred on systems and processes, rather than individual pieces of
content, putting these at the heart of companies' responsibilities. <o:p></o:p></span></span></p>
<p class="MsoNormal" style="text-indent: 36.0pt;"><span style="line-height: 115%;"><span style="font-family: georgia;">…<o:p></o:p></span></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="line-height: 115%;"><span style="font-family: georgia;">The focus on robust processes and systems rather than
individual pieces of content has a number of key advantages. The scale of
online content and the pace at which new user-generated content is uploaded
means that a focus on content would be likely to place a disproportionate
burden on companies, and lead to a greater risk of over-removal as companies seek
to comply with their duties. This could put freedom of expression at risk, as
companies would be incentivised to remove marginal content. The focus on
processes and systems protects freedom of expression, and additionally means
that the Bill’s framework will remain effective as new harms emerge. <o:p></o:p></span></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;"><span style="line-height: 115%;">The regulator will be focused on oversight of the
effectiveness of companies’ systems and processes, including their content
moderation processes. The regulator will not make decisions on individual
pieces of content, and will not penalise companies where their moderation
processes are generally good, but inevitably not perfect.”</span> <span style="font-size: 10.0pt; line-height: 115%;"><o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government appears to be arguing that since a service
provider would not automatically be sanctioned for a single erroneous removal
decision, it would tend to err on the side of leaving marginal content up. Why such an incentive would operate
only in the direction of under-removal, when the same logic would apply to
individual decisions in either direction, is unclear. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Be that as it may, elsewhere the draft Bill hardwires a
bias towards over-removal into the illegal content safety duty: by setting the
threshold at which the duty bites at ‘reasonable grounds to believe’ that the
content is illegal, rather than actual illegality or even likelihood of
illegality. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government’s broader claim is that centring duties on
systems and processes results in a regulatory regime that is not focused on individual
pieces of content at all. This claim merits close scrutiny. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Safety duties, in terms of the steps required to fulfil
them, can be of three kinds:</span><span style="font-family: georgia;"> </span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;"><i>Non-content</i>. Duties with no direct effect
on content at all, such as a duty to provide users with a reporting mechanism.</span></li><li><span style="font-family: georgia;"><i>Content-agnostic</i>. This is a duty that is
independent of the kind of content involved, but nevertheless affects users’
content. By its nature a duty that is unrelated to (say) the illegality or
harmfulness of content will tend to result in steps being taken (‘friction’
devices, for instance, or limits on reach) that would affect unobjectionable or
positively beneficial content just as they affect illegal or legal but harmful
content.</span></li><li><span style="font-family: georgia;"><i>Content-related</i>. These duties are framed
specifically by reference to certain kinds of content: in the draft Bill,
illegal, harmful to children and harmful to adults. Duties of this kind aim to
affect those kinds of content in various ways, but carry a risk of collateral
damage to other content.</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">In principle a content-related duty could encompass harm
caused either by the informational content itself, or by the manner in which a
message is conveyed. Messages with no informational content at all can cause
harm: repeated silent telephone calls can instil fear or, at least, constitute
a nuisance; flashing lights can provoke an epileptic seizure. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government’s emphasis on systems and processes to some
extent echoes calls for a ‘systemic’ duty of care. To quote the Carnegie UK
Trust’s <a href="https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2021/10/06120715/Evidence-Joint-Committee.pdf">evidence to the Joint Scrutiny Committee</a>, arguing for a more systemic
approach: <o:p></o:p></span></p>
<p class="MsoListParagraphCxSpFirst"><o:p></o:p></p><blockquote><span style="font-family: georgia;">“To achieve the benefits of a systems and
processes driven approach the Government should revert to an overarching
general duty of care where risk assessment focuses on the hazards caused by the
operation of the platform rather than on types of content as a proxy for
harm.” </span></blockquote><p></p>
<p class="MsoListParagraphCxSpLast"><span style="font-family: georgia;">A systemic duty would certainly include the first two
categories of duty: non-content and content-agnostic. It seems inevitable that a systemic duty
would also encompass content-related duties. While steps taken pursuant to a
duty may range more broadly than a binary yes/no content removal decision, that
does not detract from the inevitable need to decide what (if any) steps to take
according to the kind of content involved.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Indeed it is notable how rapidly discussion of a systemic
duty of care tends to move on to categories of harmful content, such as hate
speech and harassment. Carnegie’s evidence, while criticising the draft Bill’s
duties for focusing too much on categories of content, simultaneously censures
it for not spelling out for the ‘content harmful to adults’ duty how “huge
volumes of misogyny, racism, antisemitism etc – that are not criminal but are
oppressive and harmful – will be addressed”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Even a wholly systemic duty of care has, at some level and
at some point – unless everything done pursuant to the duty is to apply
indiscriminately to all kinds of content - to become focused on which kinds of
user content are and are not considered to be harmful by reason of their
informational content, and to what degree. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To take one example, Carnegie discusses repeat
delivery of self-harm content due to personalisation systems. If repeat
delivery <i>per se</i> constitutes the risky activity, then inhibition of that
activity should be applied in the same way to all kinds of content. If repeat
delivery is to be inhibited only, or differently, for particular kinds of
content, then the duty additionally becomes focused on categories of content. There is no
escape from this dichotomy.<o:p></o:p></span></p>
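The dichotomy can be made concrete with a deliberately simplified sketch (all names and thresholds here are hypothetical illustrations, not anything in the draft Bill or in Carnegie's proposal):

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    kind: str          # hypothetical label, e.g. "self-harm", "news", "other"
    times_shown: int   # how often this user has already been shown it

REPEAT_CAP = 3  # arbitrary illustrative threshold

def content_agnostic_inhibit(item: Item) -> bool:
    # Treats repeat delivery itself as the risky activity:
    # the rule never looks at what the item says.
    return item.times_shown >= REPEAT_CAP

def content_conditioned_inhibit(item: Item) -> bool:
    # The moment the cap varies by kind of content, the duty has
    # become focused on categories of content after all.
    cap = 1 if item.kind == "self-harm" else REPEAT_CAP
    return item.times_shown >= cap

news = Item("a", "news", 2)
selfharm = Item("b", "self-harm", 2)
print(content_agnostic_inhibit(news), content_agnostic_inhibit(selfharm))        # False False
print(content_conditioned_inhibit(news), content_conditioned_inhibit(selfharm))  # False True
```

The first rule inhibits unobjectionable and harmful content alike; the second escapes that collateral effect only by classifying content, which is the point at which a systemic duty becomes content-related.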
<p class="MsoNormal"><span style="font-family: georgia;">It is possible to conceive of a systemic safety duty expressed
in such general terms that it would sweep up anything in the system that might
be considered capable of causing harm (albeit - unless limited to risk of
physical injury - it would still inevitably struggle, as does the draft Bill, with
the subjective nature of harms said to be caused by informational content). A
systemic duty would relate to systems and processes that for whatever reason
are to be treated as intrinsically risky. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The question that then arises is what activities are to be
regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems
may create risks of various kinds. It is quite another to suggest that that is true of any kind
of U2U platform, even a simple discussion forum. If the underlying assumption of
a systemic duty of care is that providing a facility in which individuals can
speak to the world is an inherently risky activity, that (it might be thought)
upends the presumption in favour of speech embodied in the fundamental right of
freedom of expression. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The draft Bill – content-related or not?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">To what extent are the draft Bill’s duties content-related,
and to what extent systemic?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Most of the draft Bill’s duties are explicitly content-related. They are aimed at online user content that is illegal or harmful to adults or children. To the extent that, for instance, the effect of algorithms on the likelihood of encountering content has to be considered, that is in relation to those kinds of content.</span></p><p class="MsoNormal"><span style="font-family: georgia;">For content-related duties the draft Bill draws no obvious distinction
between informational and non-informational causes of harm. So risk of physical
injury as a result of reading anti-vax content is treated indistinguishably
from risk of an epileptic seizure as a result of seeing flashing images. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The most likely candidates in the draft Bill for
content-agnostic or non-content duties are Sections 9(2) and 10(2)(a). For
illegal content S.9(2) requires the service provider to “take proportionate
steps to mitigate and effectively manage the risks of harm to individuals”,
identified in the service provider’s most recent S.7(8) illegal content risk
assessment. S.10(2)(a) contains a similar duty in relation to harm to children in
different age groups, based on the most recent S.7(9) children’s risk
assessment.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although the S.7 risk assessments are about illegal content
and content harmful to children, neither of the 9(2) and 10(2)(a) safety duties
is expressly limited to harm arising from those kinds of content. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Possibly, those duties are intended to relate back to
Sections 7(8)(e) and 7(9)(e) respectively. Those require risk assessments of the
“different ways in which the service is used, and the impact that has on the
level of risk of harm that might be suffered” by individuals or children
respectively – again without expressly referring to the kinds of content that constitute
the subject-matter of Sections 7(8) and 7(9).
However, to deduce a pair of wholly content-agnostic duties in Sections
9(2) and 10(2)(a) would seem to require those S.7 risk assessment factors to be considered independently of their respective contexts.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Whatever may be the scope of S.9(2) and 10(2)(a), the vast majority
of the draft Bill’s safety duties are drafted expressly by reference to
in-scope illegal or legal but harmful content. Thus, for example, the
government notes at para [34] of its evidence:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="line-height: 115%;"><span style="font-family: georgia;">“User-to-user services will be required to operate their
services using proportionate systems and processes to <i>minimise the presence,
duration and spread of illegal content</i> and to <i>remove it swiftly once
they are aware of it</i>.” (emphasis added)<span style="font-size: 10pt;"><o:p></o:p></span></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As would be expected, those required systems and processes
are framed by reference to a particular type of user content. The same is true
for duties that apply to legal content defined as harmful to adults or
children. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Impact Assessment accompanying the draft Bill states:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="line-height: 115%;"><span style="font-family: georgia;">“…it is expected that undertaking additional content
moderation (through hiring additional content moderators or using automated
moderation) will represent the largest compliance cost faced by in-scope
businesses.” (Impact Assessment [166])<span style="font-size: 10pt;"><o:p></o:p></span></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That compliance cost is estimated at £1.7 billion over 10
years. That does not suggest a regime that is not focused on content. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Individual user content<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The contrast drawn by the government is between systems and
processes on the one hand, and “individual” pieces of content on the other.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The draft Bill defines harm as physical or psychological
harm. What could result in such harm? The answer, online, can only be individual user content: that which, </span><span style="font-family: georgia;">whether alone or in combination,</span><span style="font-family: georgia;"> singly or repeated, we say and see online. Various factors may influence, to differing extents, which user content is seen by whom: user choices such as joining
discussion forums and channels, choosing topics, following each other, rating
each other’s posts and so on, or platform-operated recommendation and promotion
feeds. But none of that detracts from the fact that it is what is posted – items of user
content – that results in any impact.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The decisions that service providers would have to make –
whether automated, manual or a combination of both – when attempting to
implement content-related safety duties, inevitably concern individual items of
user content. The fact that those decisions may be taken at scale, or are the
result of implementing systems and processes, does not change that. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">For every item of user content putatively subject to a
filtering, take-down or other kind of decision, the question for a service
provider seeking to discharge its safety duties is always <a href="https://drive.google.com/file/d/1e4vSClZWin0wyG6PK68lzH7DtvjVfuZv/view">what (if anything)
should be done with <i>this</i> item of content in <i>this</i> context</a>? That is
true regardless of whether those decisions are taken for one item of content, a
thousand, or a million; and regardless of whether, when considering a service
provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy
of its systems and processes rather than with punishing service providers for
individual content decision failures. <o:p></o:p></span></p>
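The point can be seen in the shape of any moderation pipeline, however industrial its scale (a schematic sketch; the function names and the keyword rule are invented for illustration):

```python
def is_reasonably_believed_illegal(item: str, context: str) -> bool:
    # Stand-in for whatever classifier or human judgment the provider
    # deploys; a hypothetical keyword rule purely for illustration.
    return "illegal-thing" in item

def decide(item: str, context: str) -> str:
    # However much machinery surrounds it, the decision is ultimately
    # taken about *this* item of content in *this* context.
    return "remove" if is_reasonably_believed_illegal(item, context) else "leave"

def run_moderation(items: dict, context: str) -> dict:
    # "At scale" just means the same per-item question asked many times.
    return {item_id: decide(text, context) for item_id, text in items.items()}

print(run_moderation({"p1": "holiday photos", "p2": "an illegal-thing"}, "forum"))
# {'p1': 'leave', 'p2': 'remove'}
```

Evaluating the pipeline in aggregate, as Ofcom would, does not alter the per-item shape of the inner function: the system is the loop, but each iteration is a decision about one piece of content.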
<p class="MsoNormal"><span style="font-family: georgia;">A platform duty of care has been likened to an obligation to
prevent risk of injury from a protruding nail in a floorboard. <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html">The analogy is flawed</a>, but even taking that analogy at
face value the draft Bill casts service providers in the role of hammer, not
nail. The dangerous nail is users’ speech. Service providers are the tool
chosen to hammer it into place. Ofcom directs the use of the tool. Whether an
individual strike of the hammer may or may not attract regulatory sanction is a
matter of little consequence to the nail.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Even if Ofcom would not be involved in making individual
content decisions, it is difficult to see how it could avoid at some point
evaluating individual items of content. Thus the provisions for use of technology
notices require the “prevalence” of CSEA and/or terrorism content to be
assessed before serving a notice. That inevitably requires Ofcom to assess whether
material present on the service does or does not fall within those defined
categories of illegality. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">More broadly, it is difficult to see how Ofcom could
evaluate for compliance purposes the proportionality and effectiveness of
filtering, monitoring, takedown and other systems and processes without
considering whether the user content affected does or does not qualify as
illegal or harmful content. That would again require a concrete assessment of
at least some actual items of user content. <o:p></o:p></span></p>
<span style="line-height: 115%;"><span style="font-family: georgia;">It is not immediately
obvious why the government has set so much store by the claimed systemic nature
of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from
individual content decisions it can avoid accusations of state censorship. If
so, that ignores the fact that service providers, via their safety duties, are proxies
for the regulator. The effect of the legislation on individual items of user
content is no less concrete because service providers are required to make
decisions under the supervision of Ofcom, rather than if Ofcom were wielding
the blue pencil, the muffler or the content warning generator itself.</span></span><span style="font-family: Georgia, serif; line-height: 115%;"> </span><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" height="134" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" width="135" /></a></div><br /><span style="font-family: Georgia, serif; line-height: 115%;"><br /></span></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-47132053850754585672021-06-28T18:16:00.002+01:002021-06-30T10:32:16.316+01:00On the trail of the Person of Ordinary Sensibilities<p><span style="font-family: georgia;">One of the more perplexing provisions of the <a href="https://www.cyberleagle.com/2021/05/harm-version-30-draft-online-safety-bill.html">draft Online Safety Bill</a> is its multi-level definition of legal but harmful content (lawful
but awful content, to give it its colloquial name).</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The proposal that service providers’ safety duties under the
Bill should apply to such content is in itself controversial, when users
themselves – who are in the same position as authors of books – owe no duty of
care in respect of the safety of their readers. <a href="https://www.indexoncensorship.org/2021/06/governments-online-safety-bill-will-be-catastrophic-for-ordinary-peoples-freedom-of-speech-says-david-davis-mp/" target="_blank">Some campaigners</a> have argued
that the proposed service provider duties should be limited to illegal content
at most. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">But given that legal content is included, how has the
government set about drawing a line between innocuous and harmful?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The draft Bill contains twin definitions: ‘content harmful
to adults’ and ‘content harmful to children’. Since they are almost identical, I
shall refer just to harmful content. Both definitions make use of a legal fiction:
the adult or child “of ordinary sensibilities”. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Baroness Bull, in the House of
Lords, foresaw “endless court time being devoted to determining whether my
sensibilities are more ordinary than the next person's". <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Why does the draft Bill use this term? What does it mean? <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Why the Person of Ordinary Sensibilities?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The first question is easier to answer than the second. The
problem with trying to define harmful content is that speech is subjectively
perceived and experienced. Different people respond to reading, hearing or viewing
the same content in different ways. They differ as to whether they find content
offensive, shocking or disturbing, they differ in their emotional response
(enjoyment, distress, anger, fear, anxiety), they differ as to whether they
change their views after reading, hearing or seeing it, and they differ in
terms of any action that they may or may not choose to take after reading,
hearing or seeing it. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Legislation based purely on subjectively perceived harm is thus
liable to adopt, by default, the standard of the most easily shocked, upset or
offended. Translated into service provider obligations, when assessing risk of
harm on its service the service provider might have to assume a low threshold and the most sensitive user.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To counter this, an available tool is to restrict the kinds of harm that are in scope, so that (for instance) mere annoyance does not count.
The draft Bill stipulates ‘physical or psychological harm’. However,
psychological harm still contains a significant element of subjectivity – it is
not restricted to a medical condition – and in any event there remains the issue
of people’s differing susceptibilities to psychological impact. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">An approach to addressing variable susceptibility is to
posit a notional reader defined in objective – or at least pseudo-objective –
terms (discussed in detail in section 5 of my <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html">submission</a> to the Online Harms White Paper consultation). The law contains many examples of such legally fictional characters, from
the Man on the Clapham Omnibus to the Right-Thinking Member of Society. They
are intended to iron out extremes – but in order to achieve that they still need
to be clothed in attributes selected by the statute, the court or both. Goddard
L.J. once observed: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Of course, different minds have
different ideas as to what is moderate, and seeking for a mean, a normal, or an
average where there really is no guide is very like Lord Bowen’s illustration
of a blind man looking for a black hat in a dark room”. (<i>Mills v Stanway
Coaches Ltd</i> [1940] 2 K.B. 334)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Such legally fictional characters are normally deployed as part of a process of determining liability after the event, based on ascertained facts
involving known individuals, tested and argued through the adversarial court
process. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">By contrast, the service provider under the draft Online
Harms Bill would be expected to engage in a process of predictive policing,
anticipating the kinds of content that, if they were to appear on the service,
the service provider would have reasonable grounds to believe satisfied the
definition of harm. It would have to consider the concomitant risk posed by them; and (most probably) write an algorithm to address them.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The task that the draft Bill assigns to service providers is thus
to predict, seek out, detect and then either deal with (for adult harmful
content), or mitigate, manage or prevent (for various kinds of child harmful
content), any number of different hypothetical black or grey hats that might or
might not be present in the dark room.<span style="mso-spacerun: yes;"> </span><o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Level 1 - The Person of Ordinary Sensibilities <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The legally fictional character chosen to bring some objectivity to the
draft Online Harms Bill is the Person of Ordinary Sensibilities. So at Level 1 of
the multi-level definition, S.46(3) defines content harmful to an adult as
content the nature of which is such that “<i>there is a material risk of the
content having, or indirectly having, a significant adverse physical or
psychological impact on an adult of ordinary sensibilities</i>”. I will descend
into the lower levels of the definition presently. <o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Antecedents<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">A version of the Person of Ordinary Sensibilities is found
in existing law. The DCMS Minister for Digital and Culture Caroline Dinenage, in her <a href="https://committees.parliament.uk/publications/6336/documents/69560/default/">letter of 16 June 2021</a> to the Lords
Communications and Digital Committee, said: “<i>This concept is already
well-established in law, for example in case law concerning the tort of misuse
of private information.</i>” This refers to the judgment of the House
of Lords in the Naomi Campbell case. However, there are significant differences
between misuse of private information and infliction of psychological harm. Moreover,
when we delve into the antecedents of the Person of Ordinary Sensibilities we
find that a mutation has occurred. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The main difference from privacy is that the focus of infliction
of psychological harm is on the reader of the material: the person on whom the
harm is inflicted. In contrast, in the tort of misuse of private information the hypothetical
Reasonable Person of Ordinary Sensibilities refers to the person whose privacy
is said to have been invaded, not someone who reads the disclosed information. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The privacy test is therefore not about impact on someone
who receives information. It is whether the Reasonable Person of Ordinary Sensibilities,
put in the position of the person whose private information is said to have been
misused, would find the disclosure offensive or objectionable. What view would that
hypothetical person, put in the position of the claimant and exercising their
rational faculties, take of such disclosure about their own private life? <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Caution is therefore necessary in transposing the
Reasonable Person of Ordinary Sensibilities from misuse of private information
to psychological impact on the reader. <o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Must the Person of Ordinary Sensibilities be Reasonable?<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">Most intriguingly, somewhere on the journey from <i>Campbell
v MGN</i> to the draft Online Safety Bill, ‘Reasonable’ has been jettisoned. </span></p><p class="MsoNormal"><span style="font-family: georgia;">This can be no accident since ‘reasonable’ is an integral part of the <i>Campbell</i>
formulation, and can be traced back in turn to a 1960 US paper on Privacy by
Dean William Prosser. Why would anyone take a conscious decision to strike out
‘Reasonable’? Why include the Unreasonable Person of Ordinary Sensibilities? I
have given some thought to this and - on the assumption that Reasonable has indeed been omitted for a reason - I have a possible answer. Whether it is the
actual explanation I do not know. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">When considering whether reasonableness is relevant, recall
that for inflicted harm - unlike for privacy - we are considering the impact of
the information on its recipient. If you jab someone in the arm with a needle, any
person of ordinary sensibilities will react autonomically in the same way (if
not necessarily to the same degree): with pain and blood. There is no room for any
additional concept of reasonableness, since the reaction of the person to whom
it is done is not a matter of conscious decision. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Omitting “reasonable” in the
draft Bill’s formulation suggests either that the drafters of the Bill have
assumed the same to be true of imparting information; or if not, that as far as
the draft Bill is concerned the reasonableness of the reader’s conscious reaction is irrelevant.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">We can conceive of a circumstance in which reaction to
information is not a matter of conscious decision. If someone suffering from epilepsy
were to encounter online content containing flashing lights, a physical
reaction might be triggered. It would appear likely to fit the description of 'significant adverse physical impact'. That reaction is not in any sense a matter of voluntary
choice, but a question of someone’s sensitivity to flashing lights. As with the
needle in the arm, reasonableness of the reaction is simply an irrelevant
concept of no application. The only relevant question is whether the
sensibilities of an epilepsy sufferer should be considered to be ordinary. (More
of that when we consider the Level 2 definition.)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That, it seems, is how the draft Online Harms Bill
approaches the matter of reading online content, not just for physical harm but
also psychological harm. It would be consistent with the phraseology “content having a
significant… impact”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">One possible interpretation of the draft Bill is that only
information causing an autonomic adverse psychological impact is in scope. Any
kind of impact that engages the rational faculties of the reader, and to which
the reasonableness of the reader’s chosen reaction is therefore a conceptual
possibility, would be out of scope. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That seems very unlikely to be the government’s intention, first
because the distinction (if there is one) verges on deep psychological
and even philosophical questions about what is and is not a conscious reaction.<span style="mso-spacerun: yes;"> </span>Does a Person of Ordinary Sensibilities respond
automatically or make a choice in how they react emotionally to encountering,
say, prejudice of various kinds? What if the question of whether the particular
speech in question amounts to prejudice in the first place is contested and debated, each
side regarding the other as prejudiced? <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Second, such a narrow interpretation would appear to exclude
from scope informational subject matter (such as misinformation) that the
government plainly intends to include and is referred to elsewhere in the draft
Bill.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The second (and I would say probable) interpretation is that the formulation includes
situations in which the reader has a degree of conscious choice about how to
react, but nevertheless the reasonableness of the reaction is to be treated as irrelevant.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is a certain logic to that when we consider
misinformation. Any potentially harmful impact of misinformation or
disinformation necessarily depends on the reader believing what they are told. Deciding
what to believe involves an exercise of the critical faculties. Capturing all misinformation
within the definition of harmful content depends upon excluding reasonableness
from the equation and including the Credulous Person of Ordinary Sensibilities
within the notional reader. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To take an extreme example of the distinction between
sensibilities and reasonableness, consider a post predicting that the world
will end tomorrow. Would a person of ordinary sensibilities experience
significant adverse psychological impact if they were to believe it? It is hard
to think otherwise. At any rate there would surely be reasonable grounds for a
service provider to believe that that was a material risk. Would a reasonable
and well-informed person believe it? No. If reasonableness of the belief is ruled
out of consideration, the Credulous Person of Ordinary Sensibilities is within
scope, the end-of-the-world post falls within the definition of harmful content
and is within the service provider’s safety duty. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Conversely, if reasonableness is a relevant attribute of the
Person of Ordinary Sensibilities, then the more outlandish the misinformation, the less likely it would be to fall within scope. The service provider – in addition to
all the other fiendish judgements that it is required to make - would have to
distinguish between what misinformation it is reasonable and unreasonable to
believe.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This is not some esoteric academic point. In the USA claims
for negligent infliction of emotional distress are permitted in some states. The
New Jersey Supreme Court in <i>Williamson v Waldman</i> limited recovery to “the
fears experienced by a reasonable and well-informed person.” This was a case based
on fear of contracting AIDS as a result of having been pricked by a discarded
medical lancet while cleaning a trash can. The court observed:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Therefore, as a matter of sound
public policy, the standard of proximate cause should require as an element of
the test of causation a level of knowledge of the causes, transmission and
risks of AIDS. Such an enhanced standard will serve to overcome and discourage
ignorance about the disease and its resultant social ills. Thus, the
reasonableness standard should be enhanced by the imputation to the victim of
emotional distress based on the fear of contracting AIDS of that level of
knowledge of the disease that is then-current, accurate, and generally
available to the public.”<o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">What is a significant adverse psychological impact?<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">The range of possible emotional reactions to a given item of
content may give rise to difficult questions.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Does our notional Person of Ordinary Sensibilities become
angry, anxious, fearful or distressed when they read certain content? Is anger
an adverse psychological impact? Or do only the other reactions, if they are significant,
qualify as adverse? Does the service provider have to gauge, hypothetically,
whether our fictional legal character would be angered or distressed by reading particular
kinds of content?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Is the fact that (say) serious distress is one possible
reaction of our notional Person of Ordinary Sensibilities enough to satisfy the definition and trigger
the service provider’s safety duties? Does the service provider have to
consider whether, the more highly charged the subject matter of a debate, the
more likely it is that someone will claim to be traumatised by the repugnant views
of their opponent?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Physical and psychological harm are not supposed to be
about taking offence or objection. On the other hand the government has said
that psychological harm is not intended to be limited to medically recognised
conditions. The examples of kinds of significant negative effect on the mental
state of an individual that they give in the Explanatory Notes are:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“<i>feelings such as serious
anxiety and fear; longer-term conditions such as depression and stress; and
medically recognised mental illnesses, both short-term and permanent.</i>”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">What is ‘significant’ may be a matter for debate. Does it mean
serious (as the Explanatory Notes suggest), or merely that it is not trivial? It
is noteworthy that some US caselaw has sought to inject a standard of reasonableness
into the seriousness of the emotional distress experienced: “a level of
distress such that no reasonable person could be expected to endure it without undergoing
unreasonable suffering”. (<i>Williams v Tennessee National Corp</i>.)<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Level 2 - characteristics and membership of groups<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Having started by saying in S.46(3) that the Person of
Ordinary Sensibilities has only ordinary sensibilities, the draft Bill goes on
to qualify that.<span style="mso-spacerun: yes;"> </span>Section 46(4)
provides that:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“… in the case of content which
may reasonably be assumed to particularly affect people with a certain
characteristic (or combination of characteristics), or to particularly affect a
certain group of people, the provider is to assume that [the Person of Ordinary
Sensibilities] possesses that characteristic (or combination of
characteristics), or is a member of that group (as the case may be).”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To take our previous example of a sufferer from epilepsy, if
their sensibilities are not Ordinary under S.46(3), they would appear to be so
under S.46(4). Epilepsy seems apt to count at least as a characteristic, in
which case the service provider should consider whether there is a material risk
of user content with flashing lights affecting sufferers from epilepsy.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Curiously, the DCMS Minister’s letter to the Lords Committee
said that “use of the term “ordinary sensibilities” is intended to make clear
that the test of whether legal content is harmful does not include content that
only people with an unusual sensitivity (such as a phobia) would be harmed by.” Perhaps the Minister was intending to refer only to S.46(3). If she was also including S.46(4), it is not clear to me why (say) epilepsy would not be within scope of that section.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The issues under S.46(4) become more complex when previous
experience is brought into the equation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Lords Communications and Digital Committee asked the
DCMS whether being a survivor of sexual abuse would count as a relevant
characteristic. The Minister’s first comment was that it would expect Ofcom’s
codes of practice and any supplementary guidance to assist service providers to
fulfil their obligations in relation to any such points – which is not really to
the point, since the question was about the meaning of the legislation (by which Ofcom would be bound). </span></p><p class="MsoNormal"><span style="font-family: georgia;">However, the Minister went on to suggest that experiences that can have a
profound effect on victims should be taken into account by service providers
when assessing the risk of harm posed by online content to individuals. The
person of ordinary sensibilities would include someone who had had that
experience. The same would apply in other cases where content could potentially
give rise to a material risk of significant adverse physical or psychological
impact on survivors of an experience.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">One effect of this provision appears to be that if different
survivors of an experience might react differently to certain content – some,
perhaps, finding discussion of a difficult subject helpful and some suffering
anxiety or worse – the service provider should assume the adverse reaction.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Level 3 – indirect impact on a Person of Ordinary
Sensibilities<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Section 46(7) defines indirect impact on a Person of
Ordinary Sensibilities. S.46(7)(a) addresses the risk of content causing an individual
to do or say things to a targeted adult that would have an adverse physical or
psychological impact on such an adult.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In this context it is clear that the individual concerned is
making a conscious choice about how to respond to content. However, the section
speaks in terms of “content causing an individual to do or say things” to
another adult. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The unstated premise appears to be that an individual makes no conscious decision – that reading content causes the individual to act in a certain way. However, we read and view and make decisions. We may do something or
nothing. If we do something, we choose what to do.<span style="mso-spacerun: yes;"> </span>Content does not cause a single, involuntary,
Pavlovian response.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The DCMS Minister, in her letter to the Lords
Communications Committee, suggested that in this instance reasonableness of the
interposed individual’s response is in fact a limiting factor: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The service provider would not
have the necessary reasonable grounds to believe that there was such a risk if
the content could only have such an effect by <i>triggering an unexpected
response in an unreasonable person (for example innocuous content leading to
risky or violent behaviour)</i>”. (emphasis added)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is a tension between referring to a response as being 'triggered', while simultaneously considering the reasonableness of the response. </span></p><p class="MsoNormal"><span style="font-family: georgia;">A provision of this kind might include a reference to
whether it was reasonably foreseeable that an individual would decide to take a
certain kind of action as a result of reading certain kinds of content, and
whether that action was reasonable. S.46(7) is silent on that. The government’s view could perhaps be that the reasonableness limitation is implicit in causation. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Level 4 – the Ultimate Demise of the Person of Ordinary
Sensibilities<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Section 46(6) contains a further refinement of the Person of
Ordinary Sensibilities, dealing with the situation where there is a known
person at whom content is directed, or who is the subject of it. At this point the </span><span style="font-family: georgia;">Person of Ordinary Sensibilities</span><span style="font-family: georgia;"> is abandoned and replaced with the
person’s own sensibilities.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Thus where the provider has knowledge,
relevant to the content, about a particular person at whom content is directed, the risk of significant physical or psychological impact on that person is to be considered, taking into account any of the following known to or
inferred by the provider—<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">(a) that person’s characteristics;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">(b) that person’s membership of a certain group of people.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The effect of this section appears to be that someone who
claims to be significantly and adversely psychologically impacted by particular
content can put the service provider on notice. If the service provider has
reasonable grounds to believe that a material risk of such impact exists, then its safety duty focuses on that person and that content. We can imagine that a </span><span style="font-family: georgia;">service provider would be reluctant to deny the risk, once put on notice of the claim. As such, this </span><span style="font-family: georgia;">provision appears to open the door to a form of individual veto.</span></p>
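For the programmers in the audience, the cascade of notional readers described above can be caricatured as the selection logic that a provider's hypothetical compliance algorithm would have to embody. This is an illustration only, not anything in the Bill: every name below is invented, and the crucial “material risk of significant impact” judgement is deliberately left unimplemented, since that evaluative step is precisely what resists mechanisation.

```python
# Illustrative sketch only: the draft s.46 cascade for choosing whose
# sensibilities a service provider must consider. All names are invented.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class KnownPerson:
    # Level 4 (s.46(6)): attributes known to or inferred by the provider
    characteristics: frozenset = frozenset()
    groups: frozenset = frozenset()


@dataclass
class Content:
    # Level 2 (s.46(4)): characteristics or groups that the content may
    # reasonably be assumed to particularly affect
    particularly_affects: frozenset = frozenset()
    directed_at: KnownPerson | None = None  # Level 4 trigger


def select_notional_reader(content: Content):
    """Whose sensibilities must the provider assess? (illustrative)"""
    if content.directed_at is not None:
        # Level 4: the notional reader is abandoned in favour of the
        # actual person's own (known or inferred) attributes.
        return content.directed_at
    if content.particularly_affects:
        # Level 2: the Person of Ordinary Sensibilities is deemed to
        # possess the relevant characteristic or group membership.
        return ("person of ordinary sensibilities", content.particularly_affects)
    # Level 1: the plain adult of ordinary sensibilities.
    return ("person of ordinary sensibilities", frozenset())


def harmful_to_adults(content: Content) -> bool:
    reader = select_notional_reader(content)
    # The evaluative core -- whether there is a "material risk of
    # significant adverse physical or psychological impact" on the selected
    # reader, directly or indirectly (s.46(7)) -- is exactly the judgement
    # that cannot be reduced to code.
    raise NotImplementedError("the s.46 impact judgement itself")
```

Even this toy version makes the point: the mechanisable part is only the choice of hypothetical reader; everything that matters happens inside the unimplementable predicate.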
<p class="MsoNormal"><b><span style="font-family: georgia;">Conclusion<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The draft Bill's attempt to convert subjective perception of
content into an objective standard illustrates just how difficult it is to
apply concepts of injury and harm to speech. The cascading levels of definition,
ending up with a provision that appears to give precedence to an individual’s subjective claim
to significant adverse psychological impact, will bear close scrutiny – not only in their own right, but as to how a
service provider is meant to go about complying with them. </span><o:p></o:p></p><p class="MsoNormal"><span style="font-family: georgia;">[30 June 2021. Inserted 'is to be treated as', for clarity. Deleted erroneous 'not'.]</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><p class="MsoNormal"><b><span style="font-family: georgia;">Speech vs. Speech</span></b><span style="font-family: georgia;"> [22 June 2021]</span></p><p><span style="font-family: georgia;">Can something that I write in this blog restrict someone
else’s freedom of expression?</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">According to the UK government, yes. In its Full Response to
the Online Harms White Paper the government suggested that under the proposed legislation
user redress mechanisms to be provided by platforms would enable users to
“challenge content that unduly restricts their freedom of expression”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To anyone brought up on the traditional notion that a
fundamental right of freedom of expression exists in order to limit the uniquely
coercive power of the state, the proposition that one individual is capable of
restricting another individual’s freedom of expression (let alone that they can
do so merely by writing and publishing) is a contradiction in terms. Yet
presumably the government meant something by it.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Happily, that phrase did not make it into the draft Online Safety
Bill. Perhaps someone thought better of it. Nevertheless, it amply illustrates
the fog of confusion that arises once we embark on a discussion of freedom of
expression. We elide freedom of expression as a desirable value and freedom of
expression as a fundamental right. We confuse substantive laws with the surrounding
metalaw of fundamental rights. We conflate shields and swords. We employ the
same terms to describe protection from state coercion and using state coercion
as an instrument. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As a result, discussions of freedom of expression tend to
resemble convoys of ships passing in the night. If, by the right of freedom of
expression, Alice means that she should be able to speak without fear of being
visited with state coercion; Bob means a space in which the state guarantees,
by threat of coercion to the owner of the space, that he can speak; Carol contends
that in such a space she cannot enjoy a fully realised right of freedom of
expression unless the state forcibly excludes Dan’s repugnant views; and Ted
says that irrespective of the state, Alice and Bob and Carol and Dan all
directly engage each other’s fundamental right of freedom of expression when
they speak to each other; then not only will there be little commonality of
approach amongst the four, but the fact that they are talking about fundamentally
different kinds of rights is liable to be buried beneath the single term,
freedom of expression. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If Grace adds that since we should not tolerate those who are
intolerant of others’ views the state should – under the banner of upholding
freedom of expression – act against intolerant speech, the circle of confusion
is complete.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is difficult to make sense of appeals to freedom of expression
as a fundamental right without appreciating the range of different usages and
their, to some degree, contradictory underpinnings. When the same label is used to
describe a right to be protected against coercive state action, a right whose
existence is predicated on coercive state action, and everything in between, the
prospects of conducting a debate on common ground are not good. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Prompted by the existence of the Lords Communications and
Digital Committee <a href="https://committees.parliament.uk/work/745/freedom-of-expression-online/" target="_blank">Inquiry into Freedom of Expression Online</a>, this piece aims –
without any great expectation of success - to dispel some of the fog. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Freedom of expression as a value</b> Freedom of
expression as a value holds that more scope for expression is generally preferable
to less. That is a reason for resisting undue restrictions imposed by the state.
<span style="mso-spacerun: yes;"> </span>It is also a criterion by which the
policies of institutions, both private and state, may be evaluated and praised
or criticised. Although freedom of expression as a fundamental right is not the
same thing as freedom of expression as a value, the existence of the fundamental
right reflects the high value that we place on freedom of expression.<b><o:p></o:p></b></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, we also value freedom of choice. An institution that
chooses to place restrictions on the speech that it permits within its environs
is not automatically to be deprecated. We do not necessarily think less of a meeting
venue because, choosing to avoid controversy, it declines to follow the approach of Conway Hall. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That said, maximising the scope for freedom of expression may
be thought to be especially desirable in some contexts. Universities,
holding themselves out as dedicated to free and fearless academic inquiry and
debate, are one example. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If an institution’s policy on speech is criticised as overly
restrictive, the implication is that it has had insufficient regard to freedom
of expression as a value. Whether that also engages a right of freedom of
expression may depend on the version of the right adopted - Alice’s, Bob’s,
Carol’s, Ted’s or some other – and, at least for Alice’s version, whether the
institution in question forms part of the state.<span style="mso-spacerun: yes;"> Grace will consider that unfurling the banner of freedom of expression is its own justification for the state to employ coercion.</span><o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Alice’s shield against abuse of state power<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The classic formulation of freedom of expression as a
fundamental right is Alice’s version: a protective shield against abuse of the
uniquely coercive power of the state. That is most plainly rendered in the US
First Amendment:<span style="mso-spacerun: yes;"> </span>“Congress shall make no
law…”. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although the European Convention on Human Rights is more
equivocal, its primary concern is also said to be coercion by the state. ECtHR
caselaw refers to the “primarily negative undertaking of a State to abstain
from interference in the rights guaranteed by the Convention”. To comply with
the Convention, an interference with freedom of expression by the state must be
prescribed by law and satisfy conditions of necessity and proportionality.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">From this perspective the proposition that my writing can
restrict your (right of) freedom of expression is startling. My writing is not state
action. As such, the proposition is orthogonal to Alice’s notion of a
fundamental right. It falls at the first hurdle. We never reach the question of
whether – and if so how – speech might of itself be capable of restricting someone’s
differently conceptualised right of freedom of expression.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Bob and Carol’s sword of horizontality<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, some concepts
of fundamental rights are broader than a shield against state coercion. One such
is the notion of a positive state obligation. That may require the state to
unsheath its sword and take positive steps to ‘secure’ an individual’s
fundamental right. In its simplest, bilateral, form a state can be required to take
positive steps to protect an individual’s right as against the state itself.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In Europe the theory of positive state obligations has reached
its full flowering with the theory of horizontal fundamental rights. This is borrowed
from the German constitutional law concept of <i>Drittwirkung</i> and has increasingly
been adopted by the European Court of Human Rights. In this version the state is
obligated not merely to refrain from interfering unjustifiably with someone’s Convention
rights, but may positively be obliged to wield the sword of coercive power in
order to secure a Convention right as between one private individual and
another.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Strasbourg Court frequently recites that the obligations
on the State are not necessarily limited to abstaining from interference with Convention
rights, but may “require positive measures of protection, even in the sphere of
relations between individuals” (see e.g. <i>Palomo Sánchez v Spain</i>, 12
September 2011).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The ECHR evolution from shield to sword is well summarised
by Monika Florczak-Wątor:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“After World War II, the European
Convention on Human Rights was prepared in the belief that the greatest threats
to an individual resulted from actions of the State and its authorities. …
Several decades of development of the Council of Europe, whose main aim has
been to promote the principles of democracy and respect for human rights, have
strengthened trust in the State Parties to the Convention associated with this
organization. They ceased to be perceived as the main threat to human rights,
and the bar started to rise in terms of what was expected of them. With time
came the recognition that it was not the State but private parties that posed
the biggest threat to individuals’ rights and duties enshrined in the
Convention. … Thus, as Andrew Clapham observes, the European Convention on
Human Rights replaced the idea of protecting the individual against State
measures with the idea of protecting the individual through State measures.” (<i>The
Role of the European Court of Human Rights in Promoting Horizontal Positive
Obligations of the State</i>. International and Comparative Law Review, 2017,
vol. 17, no. 2, pp. 39–53.)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The best known example of the Strasbourg Court’s invocation of
horizontality is its interpretation of the Article 8 privacy right. The <i>von
Hannover</i> decision led to the UK being obliged to develop a new tort of
misuse of private information. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Horizontality has been applied – but so far not often – to the
Article 10 freedom of expression right. For example, in <i>Herbai v Hungary </i>the
Strasbourg Court held that the state had a positive obligation under Article 10
to secure an employee’s right of freedom of expression as against their private
sector employer. That right was violated where the state provided no redress
when the employer dismissed the employee on account of material that the
employee posted on a website. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In principle, horizontality could be deployed to support Bob
or Carol’s position. How though, to decide the outcome? Since wielding the
sword of horizontality tends to require the state to interfere with the
fundamental rights of another person, the human rights court ends up ‘balancing’
the conflicting fundamental rights of the persons involved (or at least the
individual’s interests against those of ‘the community as a whole’) in order to
decide which should prevail. That exercise, however, is more akin to conducting
and resolving a policy debate than deciding a legal question. <span style="mso-spacerun: yes;"> </span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">‘Balancing’ is a process that the Strasbourg court often
undertakes in freedom of expression cases for a different reason. ECHR Article
10.2 permits an interference with freedom of expression by the state to be
justified on the grounds of protection of the reputation or rights of others.
‘Rights’, in this context, necessarily has a broader meaning than a right to be
protected from state interference. It implies something that the State is
entitled to use its coercive power to protect from interference by other
persons, even if it is not obliged to do so. Horizontality goes a step further
by introducing an obligation on a State to secure such a right.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is a whole topic in itself how human rights courts go
about deciding whether to invoke horizontality in a particular case; and
whether when they do so they supplant the role of the legislature by creating
substantive law, rather than limiting themselves to the metalaw role of determining
whether laws and other measures adopted by states have overstepped civilised
boundaries.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Ted’s thicket of competing rights<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Once we have ventured into the territory of horizontality
and balancing of conflicting rights, it is but a short step to think of
fundamental rights in Ted’s terms: enjoyed by individuals as against each
other. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As a matter of enforceable rights, however, Ted has taken a
step too far. Although, where horizontality is invoked, the court in effect decides
where to draw the line between the rights of two private persons, the exercise
is still conducted via the medium of the state. Judgments of the Strasbourg
court are addressed to Contracting States. They stipulate what domestic laws
they must not have or (in the case of positive obligations and horizontality)
must have. Strasbourg decisions do not create directly assertable and enforceable
rights as between one individual and another. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Nevertheless, Ted’s perspective is almost inevitably adopted
as shorthand. Fundamental rights are universally discussed in horizontal terms.
As their primary function of protection against the state has assumed
comparatively less prominence, fundamental rights have come to resemble a
thicket of competing rights, each one demanding that the balance with other
conflicting rights be resolved in its favour and secured by the sword of state
action. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">With each step away from Alice’s basic shield against the
excesses of state power towards Ted’s thicket of horizontal rights, fundamental
rights become ever more intricately woven into the fabric of society – yet,
paradoxically, woven with thinner thread as the content of the various rights
asserted becomes ever more contested, subjective and conflicting. Appeals to
fundamental rights increasingly come to resemble little more than policy advocacy
clothed in the language of rights.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Speech as a restriction on freedom of expression<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Returning to the government’s suggestion that users could
“challenge content that unduly restricts their freedom of expression”: if it is
conceptually possible for a private actor to restrict someone else’s fundamental
freedom of expression right, could writing a blog or a social media post
qualify? In other words, can speech itself restrict someone else’s fundamental
right of freedom of expression? Alice rejects the premise. Bob has no view.
Carol says yes. Ted says speech is violence. Dan has no say, since his views
are repugnant.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">What of Grace? She is busy taking the sword to the village
in order to save it.</span><o:p></o:p></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-79795013988511001512021-06-16T23:15:00.002+01:002021-06-16T23:16:20.323+01:00Carved out or carved up? The draft Online Safety Bill and the press<p><span style="font-family: georgia;">When he announced the Online Harms White Paper in April 2019
the then Culture Secretary, Jeremy Wright QC, was at pains to reassure the
press that the proposed regulatory regime would not impinge on press freedom.
He wrote in a <a href="https://www.societyofeditors.org/wp-content/uploads/2019/04/20190410-DCMS-SoS-to-Society-of-Editors.pdf" target="_blank">letter to the Society of Editors</a>:</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“where these services are already
well regulated, as IPSO and IMPRESS do regarding their members' moderated
comment sections, we will not duplicate those efforts. Journalistic or editorial
content will not be affected by the regulatory framework.” <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The last sentence, at any rate, always seemed like an
impossible promise to fulfil. The government’s subsequent attempts to live up
to it have resulted in some of the more inscrutable elements of the draft
Online Safety Bill.<span style="mso-spacerun: yes;"> </span><o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Carve-out for news publisher content<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is true that ‘news publisher content’ is carved out of the safety
duties that would be imposed on user to user and search services. <span style="mso-spacerun: yes;"> </span>The exemption is intended to address the problem
that a news publisher’s feed on, for instance, a social media site would
constitute user generated content. As such, without an exemption it would be
directly affected by the social media platform’s own duty of care and
indirectly regulated by Ofcom.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, a promise not to affect journalistic or editorial
content goes further than that. First, the commitment is not limited to
broadcasters or newspapers regulated by IPSO or IMPRESS.<span style="mso-spacerun: yes;"> </span>Second, as we shall see, a regulatory framework may still have
an indirect effect on content even if the content is carved out of the framework.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Furthermore, even trying to exclude direct effect gives
rise to a problem. If you want to carve out the press, how do you do so without
giving the government (or Ofcom) power to decide who does and does not qualify
as the press? If a state organ draws that line, isn’t the resulting official
list in itself an exercise in press regulation? We shall see how the draft Bill
has tried to solve this conundrum.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those
that facilitate it must take precautionary steps to mitigate the danger. That
is the antithesis of the traditional principle that, within boundaries set by
clear and precise laws, we are free to speak as we wish. The mainstream press
may comfort themselves that this novel approach to speech is (for the moment) being
applied only to the evil internet and to the unedited individual speech of
social media users; but it is an unwelcome concept to see take root if you
have spent centuries arguing that freedom of expression is not a fundamental
risk, but a fundamental right.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Even the most voluble press advocates of imposing a duty of
care on internet platforms have offered what seems a slightly muted welcome to these
aspects of the draft Bill. Lord Black, in the House of Lords on 18 May 2021,
(after declaring his interest as deputy chairman of the Telegraph Media Group)
said:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“The draft Bill includes a robust
and comprehensive exemption for news publishers from its framework of statutory
regulation … . That is absolutely right. During pre-legislative scrutiny of the
Bill, we must ensure that this exemption is both watertight and practical so
that news publishers are not subject to any form of statutory control, and that
there is no scope for the platforms to censor legitimate content.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">One might ask what constitutes ‘legitimate’ content and who
– if not the platforms – would decide. Ofcom? At any rate the draft Bill will disappoint
anyone hoping for a duty of care regime that could not have any effect at all on news
publisher content. It is difficult to see how things could be otherwise, the former
Culture Secretary’s promise notwithstanding. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The draft Bill<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Now we can embark on a tour of the draft Bill’s attempts to square
the circle of delivering on the former Secretary of State’s promise. First, a
diagram. <o:p></o:p></span></p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfRm8SniRMvAt9Y0-ZBsBQh1v1KNpK8PBPvjA_Fdd549ZicAHL0-Usv8BmIng040MC55ivbbRVSshTZtiKUHx2AlUYq_PEVzBM_8h74IiXEawdotZcKeP4ibFAx_0ZeytegVsXhJC7LH77/s1349/Press+journalism+Online+Safety-revised.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="933" data-original-width="1349" height="276" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfRm8SniRMvAt9Y0-ZBsBQh1v1KNpK8PBPvjA_Fdd549ZicAHL0-Usv8BmIng040MC55ivbbRVSshTZtiKUHx2AlUYq_PEVzBM_8h74IiXEawdotZcKeP4ibFAx_0ZeytegVsXhJC7LH77/w400-h276/Press+journalism+Online+Safety-revised.png" width="400" /></a></div><br /><p class="MsoNormal"><span style="font-family: georgia;">Got that? Probably not.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">So let us conduct a point by point examination of how the
draft Bill tries to exclude the press from its regulatory ambit, and consider how
far it succeeds. The News Media Association’s <a href="http://www.newsmediauk.org/write/MediaUploads/PDF%20Docs/Online_Harms_White_Paper_News_Media_Association_Response_1_July_2019_sr.pdf" target="_blank">submission to the White Paper consultation</a>, to which I will refer, contained a list of what the NMA thought
the legislation should do in order to carve out the press.
Unsurprisingly, the draft Bill falls short. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">But first, a note on terminology: it is easy to slip
into using ‘platforms’ to describe those organisations in scope. We immediately
think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is
not only about them: the government estimates that 24,000 companies and
organisations will be in scope. That is everyone from the largest players to an
MP’s discussion app, via Mumsnet and the local sports club discussion forum. So,
in an effort not to lose sight of who is in scope, I shall adopt the dismally anodyne
‘U2U provider’.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Moderated comments sections<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The first limb of the Secretary of State’s commitment was to
avoid duplicating existing regulation of moderated comments sections on
newspapers’ own websites. That has been achieved not by a press-specific
exemption, but through the draft Bill’s general exclusion of low risk ‘limited
functionality’ services. This provision exempts services in which users are
able to communicate only in the following ways: posting comments or reviews
relating to content produced or published by the provider of the service (or by
a person acting on behalf of the provider), and in various specified related
ways (such as ‘like’ or ‘dislike’ buttons). <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This exemption as drafted has problems, since technically (even
if not contractually) a user is able to post anything to a non-proactively
moderated free text review section. That could include comments on comments – a degree
of freedom which of itself appears to be disqualifying – even if the intended
purpose is that the facility should be used only for reviewing the provider’s
own content.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As for the protection that the exemption tangentially offers
to comments sections on press websites, it is notable that it can be repealed or
amended by secondary legislation, if the Secretary of State considers that to
be appropriate because of the risk of physical or psychological harm to
individuals in the UK presented by a service of the description in question. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>News publisher content – what is it?</b><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">News publisher content present on a service is exempted from
the service provider’s safety duties. There are two primary categories of news
publisher content: that generated by UK-regulated broadcasters and that generated
by other recognised news publishers. The latter have to meet a number of
qualifying conditions, both administrative and substantive. <o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Administrative conditions<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">Administratively, a putative recognised news publisher must:</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(a)<span style="font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; line-height: normal;">
</span></span><span style="font-family: georgia; text-indent: -18pt;">Be an entity (i.e. an incorporated or
unincorporated body or association of persons or an organisation) </span><span style="font-family: georgia; text-indent: -18pt;"> </span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: 36pt;"><span> </span>(b) have a registered office or
other business address in the UK</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: 36pt;"><span> </span>(c) be the person with legal
responsibility for material published by it in the UK</span></p><p class="MsoNormal"><span style="font-family: georgia;"><span> </span>(d) publish (by any means
including broadcasting) the name, address, and registered number <span> </span>(if any) of the
entity; and publish the name and address (and where relevant, registered or
principal office and registered number) of any person who controls the entity (control
meaning the same as in the Broadcasting Act).</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Failure to meet any of these conditions would be fatal to an
argument that the entity’s output qualified as news publisher content.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Organisations proscribed under the Terrorism Act 2000, or
the purpose of which is to support a proscribed organisation, are expressly
excluded from the news publisher exemption.<o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Substantive conditions<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">Substantively, the entity must:</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: -18pt;"><span style="font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; line-height: normal;"><span> </span>(a) </span></span><span style="font-family: georgia; text-indent: -18pt;">Have as its principal purpose the publication of
news-related material, such material being created by different persons and
being subject to editorial control.</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(b) Publish
such material in the course of a business (whether or not carried on with a
view to profit)</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(c) Be subject to a standards code (one published
either by an independent regulator or by the entity itself)</span></p><p class="MsoNormal"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(d) Have
policies and procedures for handling and resolving complaints.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Again, failure to meet any of these conditions would be
fatal.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">‘News-related material’ has the same definition as in the
Crime and Courts Act 2013:</span><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(a) News or information about current affairs</span></p><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(b) Opinion
about matters relating to the news or current affairs; or</span></p><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(c) Gossip about celebrities, other public figures
or other persons in the news.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">News-related material is ‘subject to editorial control’ if
there is a person (whether or not the publisher of the material) who has
editorial or equivalent responsibility for the material, including
responsibility for how it is presented and the decision to publish it.<o:p></o:p></span></p>
<p class="MsoNormal"><i><span style="font-family: georgia;">Reposted news publisher material<o:p></o:p></span></i></p>
<p class="MsoNormal"><span style="font-family: georgia;">The draft Bill also contains limited exemptions for news
publisher content reposted by other users. To qualify, the material must be
uploaded to or shared on the service by a user of the service, and:</span></p><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(a) Reproduce in full an article or written item
originally published by a recognised news publisher (but not be a screenshot or
photograph of that article or item or of part of it);</span></p><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(b) Be
a recording of an item originally broadcast by a recognised news publisher (but
not be an excerpt of such a recording); or</span></p><p class="MsoNormal" style="text-indent: 0px;"><span style="font-family: georgia; text-indent: -18pt;"><span> </span>(c) Be a link to a full article or written item
originally published, or to a full recording of an item originally broadcast,
by a recognised news publisher.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">What isn’t exempted?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">What news-related content would fall outside the exemptions
from the U2U provider’s safety duties? Some of the most relevant are:</span></p><p class="MsoNormal"></p><ul style="text-align: left;"><li><span style="font-family: georgia; text-indent: -18pt;">The user reposting exemption does not apply to </span><i style="font-family: georgia; text-indent: -18pt;">quotations,
snippets, excerpts, screenshots</i><span style="font-family: georgia; text-indent: -18pt;"> and the like.</span></li><li><span style="font-family: georgia; text-indent: -18pt;">Content from </span><i style="font-family: georgia; text-indent: -18pt;">non-UK news publishers</i><span style="font-family: georgia; text-indent: -18pt;"> will
not be exempt unless they are able to jump through the administrative and
substantive hoops described above.</span><span style="font-family: georgia; text-indent: -18pt;"> </span><span style="font-family: georgia; text-indent: -18pt;">The
requirement to have a registered office or other business address in the UK would
itself seem likely to exclude the vast majority of non-UK news providers.</span></li><li><i style="font-family: georgia; text-indent: -18pt;">Individual journalist accounts</i><span style="font-family: georgia; text-indent: -18pt;">. Many well
known broadcast and news journalists have their own Twitter or other social
media accounts and make use of them prolifically to report on current news. These
are outside the primary exemption, since an individual journalist is not a
recognised news publisher. (Some of what individual journalists do would, of
course, fall within the re-posting exemption.) The NMA argued that the exemption
must apply to “the news publishers, corporately and individually to all their
workforce and contributors”.</span></li></ul><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">One opaque aspect of the exemption is what is meant by
content “generated” by a recognised news publisher. If a newspaper publishes a
story incorporating an embedded link to a TikTok video (as the Daily Mail did
recently with the video from a migrant boat crossing the Channel), is the link part
of the content generated by the news publisher? If so, is it anomalous that the
story – including the embedded video - on the news publisher’s own site, subsequently posted
to (say) Twitter, is exempt from Twitter’s safety duty, yet the same video
originally posted on TikTok is still within scope of TikTok’s safety duty?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The example of amateur video uploaded from a migrant boat
brings us neatly to the topic of citizen journalism. Citizen journalism is
within scope of U2U providers’ safety duties and, for ordinary U2U providers,
enjoys no special status over and above any other user generated content. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Large
players (Category 1 providers) will have a variety of freedom of expression
duties imposed on them, applicable to UK-linked news publisher content or journalistic
content, as well as some duties in respect of so-called content of democratic
importance. The duties will include, for instance, an obligation to specify in
terms and conditions by what method journalistic content is to be identified.
Since the draft Bill says only that journalistic content is content ‘generated
for the purposes of journalism’, identifying such content looks like a tall
order.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The journalistic content provisions are likely to run into
criticism from opposing ends: on the one hand that some users will rely on them
as a smokescreen to protect what is in reality non-journalistic material; and on
the other hand that the concept is too vague to be of real use, so that in
practice it hands the decision on how to categorise content to Ofcom. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">What is the significance of news publisher content being
exempted?<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The news publisher content exemption means that U2U
providers do not have a safety duty for news publisher content. In other words,
they are not obliged to include news publisher content in the various steps
that they are required to take to fulfil their safety duties.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That does not mean that news
publisher content could not be affected as a by-product of U2U providers' attempts to discharge their safety duties over other user content. Although U2U
providers are not required proactively to monitor and inhibit news
publisher content, such content could still be caught up in a
provider’s efforts to do that for user generated content generally.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Lord Black spoke of precluding any scope for platforms to censor legitimate content. The closest the draft Bill’s general provisions come is the duty 'to have regard to the importance of freedom of expression’. For
Category 1 providers the focus is additionally on dedicated, expedited
complaints procedures and transparency of terms and conditions. </span></p><p class="MsoNormal"><span style="font-family: georgia;">The Impact
Assessment concludes, under Freedom of Expression, that the regulatory model’s
focus on transparency and user reporting and redress should lead to “some
improvements” in users’ ability to appeal content removal and get this
reinstated, “with a positive impact on freedom of expression”.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Policy Risks table annexed to the Impact Assessment goes
into more detail:<o:p></o:p></span></p>
<table border="1" cellpadding="0" cellspacing="0" class="MsoTableGrid" style="border-collapse: collapse; border: none; mso-border-alt: solid windowtext .5pt; mso-padding-alt: 0cm 5.4pt 0cm 5.4pt; mso-yfti-tbllook: 1184;">
<tbody><tr style="mso-yfti-firstrow: yes; mso-yfti-irow: 0;">
<td style="border: solid windowtext 1.0pt; mso-border-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 101.2pt;" valign="top" width="135">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Risk<o:p></o:p></span></b></p>
</td>
<td style="border-left: none; border: solid windowtext 1.0pt; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 349.6pt;" valign="top" width="466">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Mitigation<o:p></o:p></span></b></p>
</td>
</tr>
<tr style="mso-yfti-irow: 1; mso-yfti-lastrow: yes;">
<td style="border-top: none; border: solid windowtext 1.0pt; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 101.2pt;" valign="top" width="135">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Regulation
disproportionately impacts on freedom of expression, by incentivising or
requiring content takedown.<o:p></o:p></span></p>
</td>
<td style="border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 349.6pt;" valign="top" width="466">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">The approach
has built in appropriate safeguards to ensure protections for freedom of
expression, including: <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">●
Differentiated approach of legal/illegal content, e.g. not requiring takedown
of legal but harmful content <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">● Safeguards
for journalistic content <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">● Effective
transparency reporting <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">●
Proportionate enforcement sanctions to avoid incentivising takedowns <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">● User
redress mechanisms will enable challenge to takedown <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">●
Super-complaints will allow organisations to lodge complaints where they may
be concerned about disproportionate impacts <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">● Regulator
has a duty to consider freedom of expression<o:p></o:p></span></p>
</td>
</tr>
</tbody></table>
<p class="MsoNormal"><span style="font-family: georgia;">The Impact Assessment summarises the government’s final
policy position thus:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“There will … be strong
safeguards in place to ensure media freedom is upheld. Content and articles
published by news media on their own sites will not be considered user
generated content and thus will be out of regulatory scope. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">Legislation will also include
robust protections for journalistic content on in-scope services. Firstly, the
legislation will provide a clear exemption for news publishers’ content. This
means platforms will not have any new legal duties for these publishers’
content as a result of our legislation. Secondly, the legislation will oblige
Category 1 companies to put in place safeguards for all journalistic content
shared on their platforms. The safeguards will ensure that platforms consider
the importance of journalism when undertaking content moderation, and can be
held to account for the removal of journalistic content, including with respect
to automated moderation tools.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">At the moment it is anyone’s guess what the various duties
would mean when crystallised into practical requirements – a vice ingrained throughout the
draft Bill. We will know only when Ofcom, however many years down the line, produces
its series of safety Codes of Practice for the various different kinds of U2U
service. A U2U provider would (unless it decides to take the brave route of
claiming compliance with the safety duties in ways other than those set out in
a Code of Practice) have to comply with whatever the applicable Code of
Practice may say about freedom of expression. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">If Ofcom were to go down the route of suggesting in a Code
of Practice that news publisher content should be walled off from being indirectly
affected by implementation of the providers’ safety duties, how could that be achieved?
The spectre of an Ofcom-approved list of news publisher content providers rears
its head again. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Even if there were such a list, how would such content be
identified and separated out in practice? The NMA consultation submission suggested a system of ‘kite
marking’. IT engineers could still be trying to build tagging systems to make that
work in ten years’ time. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government’s draft Online Safety Bill announcement
claimed that the measures required of ordinary and large providers would “<i>remove</i>
the risk that online companies adopt restrictive measures or over-remove
content in their efforts to meet their new online safety duties.”
(emphasis added)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This bold statement – in contrast with the more modest claim
in the Impact Assessment - shows every sign of being another unfulfillable
promise, whether for news publisher content or user-generated content
generally. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Lord Black said in the Lords debate:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36.0pt;"><span style="font-family: georgia;">“We have the opportunity with
this legislation to lead the world in ensuring proper regulation of news
content on the internet, and to show how that can be reconciled with protecting
free speech and freedom of expression. It is an opportunity we should seize.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It can be no real surprise that a solution to squaring that circle is as elusive now as when the Secretary of State wrote to the Society of Editors two years ago. It has every prospect of remaining so.</span><o:p></o:p></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-31725198967167005852021-06-08T10:58:00.001+01:002021-06-08T15:49:55.836+01:00Big Brother Watch/Rättvisa – a multifactorial puzzle<p><span style="font-family: georgia;">The European Court of Human Rights Grand Chamber has now
delivered its long-awaited <a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-210077%22]}" target="_blank">judgment in </a><i><a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-210077%22]}" target="_blank">Big Brother Watch</a>. </i> It always seemed a bit of a stretch that the Strasbourg
Court would tell the UK to close down the bulk (so to speak) of GCHQ’s
operations, especially since 15 years ago the <i><a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-76586%22]}" target="_blank">Weber/Saravia</a> </i>decision had accepted the principle of bulk communications surveillance (albeit in a world in which digital communications were not yet ubiquitous). </span></p><p><span style="font-family: georgia;">So it proved.
The Court’s <i>Big Brother Watch</i> judgment (and its companion judgment in the </span><span style="font-family: georgia;">Swedish <a href="https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-210078%22]}" target="_blank"><i>Centrum för Rättvisa</i></a> case)</span><span style="font-family: georgia;"> lay down a revised set of fundamental rights criteria by which to assess
bulk surveillance regimes, but do not forbid them as such.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The Grand Chamber’s approach<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The twin judgments are notable for advancing further down
the path of assessing a surveillance regime not by drawing <a href="https://www.cyberleagle.com/2015/07/red-lines-and-no-go-zones-coming.html" target="_blank">red lines</a> that must not
be crossed, but by applying a multifactorial evaluation of criteria that feed into a “global assessment” of the regime's compliance with the “provided by
law” and “necessary in a democratic society” requirements of the Convention. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The “provided by law” Convention requirement is that a measure
must have some basis in law, and also have the quality of law: be publicly
accessible and sufficiently certain and precise so as to be foreseeable in its
effects. The scope of any discretion to exercise a surveillance power must be
indicated with sufficient clarity to provide adequate protection against
arbitrary interference. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The conundrum that faces a human rights court is how such traditional
rule of law requirements – certainty of law, foreseeability of legal effects, accessibility
of a legal regime – can be applied to the inherently secret and discretionary nature of communications
surveillance. The answer has been to import the notion that safeguards (such as
independent oversight) can compensate for lack of openness, so long as the kind
of circumstances in which communications surveillance may take place are clearly
set out in legislation, supplemented if necessary by instruments such as codes
of practice. The ECtHR’s particular focus on the role of safeguards is facilitated by its
policy of considering the “provided by law” test jointly with whether the
interference constituted by a given regime is “necessary in a democratic society”
(<i>BBW</i> [334], <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> [248]).</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is not a straightforward task to decide at what point safeguards
sufficiently compensate for the rule of law deficiencies presented by secret exercise of a discretionary power. The Grand
Chamber describes the role of safeguards in bulk interception of digital
communications as “pivotal and yet elusive” (<i>BBW</i> [322], <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> [236]). </span></p><p class="MsoNormal"><span style="font-family: georgia;">It is hard to avoid the conclusion that the search for this will-o’-the-wisp is
ultimately a matter of impression – the more so, the further the evaluation strays
from red lines that cannot be crossed towards an overall multifactorial
assessment, the result of which depends on how much weight the court chooses to
give to each factor.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Bulk interception not <i>per se</i> unlawful<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The challenge that faces a party seeking to strike down a bulk
interception regime is how to bring a substantive objection – that a bulk
communications surveillance regime is inherently repugnant - within the
framework of a “quality of law” and “necessity” challenge. The argument will be that
the interference with privacy and (perhaps) freedom of expression entailed by bulk
communications interception is so great that, although useful, it does not pass the “necessity” test. This is the
kind of argument that succeeded in the <i><a href="https://hudoc.echr.coe.int/eng#{%22dmdocnumber%22:[%22843941%22],%22itemid%22:[%22001-90051%22]}">Marper</a></i> case on blanket retention of DNA, fingerprint and cellular samples. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the <i>BBW</i> and <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><span style="font-family: georgia;"><i>ttvisa</i> cases the Grand Chamber held that a decision
to operate a bulk interception regime continues to fall within the competence
(“margin of appreciation”) of a Contracting State. A state’s freedom of choice in how to operate such
a regime is, however, more constrained. (<i>BBW</i> [340, 347], <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;">
[254, 261])</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Another way of stating the objection to such a regime might
be that, given the scale of the interference, no amount of safeguards can compensate for the lack of foreseeability inherent in the secret exercise of bulk communications surveillance
powers. However, in reality once necessity is surmounted in
principle, the examination moves on to whether the combination of accessibility,
precision of rules and compensating safeguards embodied in the regime under challenge is sufficient for Convention compliance.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">The Court’s decision on RIPA <o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">In <i>BBW</i> the UK’s now superseded RIPA (Regulation of
Investigatory Powers Act 2000) regime was under challenge. As in the <a href="https://hudoc.echr.coe.int/eng#{%22appno%22:[%2258170/13%22],%22documentcollectionid2%22:[%22JUDGMENTS%22,%22ADMISSIBILITY%22,%22COMMUNICATEDCASES%22,%22CLIN%22,%22ADVISORYOPINIONS%22,%22REPORTS%22,%22RESOLUTIONS%22],%22itemid%22:[%22001-186048%22]}" target="_blank">Chamber judgment in 2018</a> the Grand Chamber found the UK regime wanting. But it did so
in slightly different ways:<o:p></o:p></span></p>
<table border="1" cellpadding="0" cellspacing="0" class="MsoTableGrid" style="border-collapse: collapse; border: none; mso-border-alt: solid windowtext .5pt; mso-padding-alt: 0cm 5.4pt 0cm 5.4pt; mso-yfti-tbllook: 1184;">
<tbody><tr>
<td style="border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Chamber<o:p></o:p></span></b></p>
</td>
<td style="border-left: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Grand
Chamber<o:p></o:p></span></b></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Article 8<o:p></o:p></span></b></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;"> </span></b></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Bulk
interception: lack of provision for sufficient oversight of the entire selection
process, specifically search criteria and selectors [387, 388]<o:p></o:p></span></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Lack of
independent authorisation at the outset [377]<o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><o:p><span style="font-family: georgia;"> </span></o:p></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Lack of
provision for oversight of categories of selectors at point of authorisation;
lack of provision for enhanced safeguards for use of strong selectors linked
to identifiable individuals [383]<o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><o:p><span style="font-family: georgia;"> </span></o:p></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Insufficiently
precise nature of SoS certificate as to descriptions of material necessary to
be examined [386, 387, 391] <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><o:p><span style="font-family: georgia;"> </span></o:p></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">All
applicable to both content and RCD [416]<o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Bulk
interception: examination of related communications data (RCD) exempted from
all safeguards applicable to content, such as S.16(2) ‘British Islands’
restriction applicable to content. [357, 387, 388]<o:p></o:p></span></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Lack of
‘British Islands’ restriction for RCD is not decisive in overall assessment [421];
different storage periods for RCD (“several months”) were not evident in the
Interception Code. Should be included in legislative and/or other general
measures [423]<o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Communications
data acquisition: Violation of EU law meant that acquisition could not be in
accordance with the law [467, 468]<o:p></o:p></span></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Not contested
[521, 522]<o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><b><span style="font-family: georgia;">Article 10<o:p></o:p></span></b></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><o:p><span style="font-family: georgia;"> </span></o:p></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Bulk
interception: lack of protection for journalistic privilege at selection and
examination stage (content and RCD) [493, 495, 500]<o:p></o:p></span></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">As per Art 8;
additionally, no requirement for a judge or similar to decide whether use of
selectors or search terms known to be connected to a journalist was justified
by an overriding requirement in the public interest; or whether a less
intrusive measure might have sufficed [456]; <o:p></o:p></span></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><o:p><span style="font-family: georgia;"> </span></o:p></p>
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Nor any provision
for similar authorisation of continued storage and examination of
confidential journalistic material once a connection to a journalist became
known. [457]<o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="border-top: none; border: 1pt solid windowtext; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 219.5pt;" valign="top" width="293">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Communications
data acquisition: insufficiently broad journalistic privilege protections
[499, 500]<o:p></o:p></span></p>
</td>
<td style="border-bottom: 1pt solid windowtext; border-left: none; border-right: 1pt solid windowtext; border-top: none; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt; width: 231.3pt;" valign="top" width="308">
<p class="MsoNormal" style="line-height: normal; margin-bottom: 0cm;"><span style="font-family: georgia;">Not contested
[527, 528]<o:p></o:p></span></p>
</td>
</tr>
</tbody></table>
<p class="MsoNormal"><span style="font-family: georgia;">The main concrete point of difference from the Chamber judgment is probably
the Grand Chamber's emphasis on prior independent authorisation. That, in the form of Judicial
Commissioner approval of the Secretary of State’s decision to issue a warrant, is
now a feature of the Investigatory Powers Act 2016 which has superseded RIPA.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is difficult to predict specific implications of the two Grand Chamber judgments for
the IP Act. This is due to the Court’s already noted holistic, multifactorial approach to fundamental rights compliance. Although in places the Grand Chamber speaks of ‘minimum requirements’ –
which might suggest a cumulative set of threshold conditions – in others it
speaks of ‘shortcomings’ that inform the overall assessment and may be
compensated for by other features of the regime. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This approach is more prominent in the <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><span style="font-family: georgia;"><i>ttvisa</i>
judgment, in which the Court held that while certain safeguards did compensate for
identified shortcomings in the Swedish regime, they did not do so sufficiently. </span><span style="font-family: georgia;">The </span><i style="font-family: georgia;">BBW</i><span style="font-family: georgia;"> judgment, while also adopting the “global assessment” approach, is
in substance a starker exercise in striking down the RIPA regime owing to lack
of certain safeguards. </span></p><p class="MsoNormal"><span style="font-family: georgia;">The main reason for the difference between the two judgments is that the Swedish surveillance regime did provide for initial authorisation of bulk warrants by an independent
Foreign Intelligence Court. It could not, therefore, be said (as it was for RIPA
in </span><i style="font-family: georgia;">BBW</i><span style="font-family: georgia;">) that the regime lacked independent authorisation at the outset (a minimum requirement that the Court has now described as a “fundamental
safeguard” that “should” be present [377]). </span><span style="font-family: georgia;"> </span><span style="font-family: georgia;">The approach of the Court in </span><i style="font-family: georgia;">Rättvisa</i><span style="font-family: georgia;"> was
therefore of necessity more nuanced.</span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Hard versus soft limits<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">By contrast with the Grand Chamber’s holistic, multifactorial
approach, the <a href="https://www.cyberleagle.com/2020/10/hard-questions-about-soft-limits.html" target="_blank">EU Court of Justice</a> has moved in the direction of insisting that the relevant legal instruments set out clear and precise <a href="https://www.cyberleagle.com/2016/09/a-trim-for-bulk-powers.html">hard limits</a> on powers. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That contrast may to some extent reflect the different roles of the two courts. The
CJEU’s task is to lay down the content of substantive, positive EU law, within
the framework of the Charter of Fundamental Rights. The task of the ECtHR is
not to harmonise or lay down positive law (although when it ventures into the territory
of horizontal rights it comes perilously close to doing that), but to determine
whether a potentially wide variety of Contracting
State laws has strayed beyond the boundaries of Convention compatibility. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although even the CJEU must allow for some differences
in Member State domestic laws, it is in principle able to be more prescriptive
than the ECtHR. </span></p><p class="MsoNormal"><span style="font-family: georgia;">At any rate, the ECtHR (confirmed by the Grand Chamber in the <i>BBW</i>
and <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> cases) has taken a softer-edged approach, with greater stress
on safeguards than on the need for clear and precise limits on powers (emphasised by
the CJEU most recently in </span><i style="font-family: georgia;">Privacy International/La Quadrature</i><span style="font-family: georgia;">)</span><span style="font-family: georgia;">. Whether or
not that ultimately means a substantively stricter outcome than the CJEU's approach, it certainly makes for one
that is less predictable in terms of compliance with the Convention.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The ECtHR’s approach is exemplified by the set of compliance
criteria articulated by the Grand Chamber in <i>BBW</i> and <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;">. It has laid down eight minimum criteria, compared with the six in </span><i style="font-family: georgia;">Weber/Saravia</i><span style="font-family: georgia;">,
to be considered in deciding whether a surveillance regime passes the initial
‘in accordance with the law’ test.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The criteria are that the Court will examine whether the
domestic framework clearly defines:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">1. the grounds on which bulk interception may be authorised;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">2. the circumstances in which an individual’s communications
may be intercepted;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">3. the procedure to be followed for granting authorisation;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">4. the procedures to be followed for selecting, examining
and using intercept material;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">5. the precautions to be taken when communicating the
material to other parties;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">6. the limits on the duration of interception, the storage
of intercept material and the circumstances in which such material must be
erased and destroyed;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">7. the procedures and modalities for supervision by an
independent authority of compliance with the above safeguards and its powers to
address non-compliance;<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">8. the procedures for independent ex post facto review of
such compliance and the powers vested in the competent body in addressing
instances of non-compliance.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">These are framed as topic areas that have to be clearly
addressed in domestic law. They also imply some degree of minimum requirement:
for instance, domestic legislation that addressed the topic of limits on the
duration of interception by stating clearly that it may be unlimited would not
pass muster. Similarly, the factors connote some level of independent
supervision and review.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">However, what those implied minimum requirements might
amount to in practice is not easy to tell. The eight topics appear to be as
much criteria to be assessed – perhaps more so – as a cumulative set of
threshold conditions to be surmounted. They
may have elements of both. The Court referred in its judgment to its ‘overall
assessment’ of the bulk interception regime, emphasising that shortcomings in
some areas may be compensated by safeguards in others. The Court may also take
into account factors beyond the eight minimum criteria, such as notification
provisions.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In a separate Opinion Judge Pinto de Albuquerque pointed out
the ambiguity in the Grand Chamber’s judgment as to whether it was laying down factors
to be considered or mandatory requirements:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“On the one hand, it has used
imperative language (“should be made”, “should be subject”, “should be
authorised”, “should be informed”, “must be justified”, and “should be
scrupulously recorded”, “should also be subject”, “it is imperative that the
remedy should”) and has called them “fundamental safeguards” and even “minimum safeguards”.
But on the other hand, it has diluted these safeguards in “a global assessment
of the operation of the regime”, allowing for a trade-off among the safeguards.
It seems that at the end of the day each individual safeguard is not mandatory,
and the prescriptive language of the Court does not really correspond to
non-negotiable features of the domestic system.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That said, the Court went on to lay down what it described
as the “fundamental safeguards” that would be the cornerstone of an Article
8-compliant bulk interception regime ([350]). This was articulated in the context of the
particular model presented to the court (collection, filtering to discard
unwanted material, automated application of selectors and search queries,
manual queries by analysts, examination by analysts, subsequent retention and use), which
the Court regarded as involving increasing interferences with privacy as the
process progressed ([325]). This model already feels somewhat old-fashioned, given the more sophisticated pattern-matching and other techniques that could be applied to the analysis of, in particular, bulk communications data. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Court's requirements are that the process must be subject to
end-to-end safeguards, meaning that: </span></p><ul style="text-align: left;"><li><span style="font-family: georgia;">At each stage of the process an assessment must be made of the necessity and proportionality of the measures being taken. [350] <br /><br /></span></li><li><span style="font-family: georgia;">Bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation are being defined. [351] <br /><br /></span></li><li><span style="font-family: georgia;">The operation should be subject to supervision and independent <i>ex post facto</i> review. [350]</span></li></ul><p class="MsoNormal"><span style="font-family: georgia;">The Court commented that the importance of supervision and
review is amplified compared with targeted interception because of the inherent
risk of abuse and the legitimate need for secrecy [349].<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Drilling down further into those fundamental safeguards, the
Court observed that:<o:p></o:p></span></p>
<p class="MsoListParagraphCxSpFirst"></p><ul style="text-align: left;"><li><span style="font-family: georgia;">The independent authorising body should be informed of both the purpose of the interception and the bearers or communication routes likely to be intercepted. [352]</span></li></ul><span style="font-family: georgia;"><ul style="text-align: left;"><li><span style="font-family: georgia;">Given that the choice of selectors and query terms determines which communications will be eligible for examination by an analyst, the authorisation should at the very least identify the types or categories of selectors to be used. The Court accepted that the inclusion of all selectors in the authorisation may not be feasible in practice. [354]</span></li></ul><ul style="text-align: left;"><li><span style="font-family: georgia;">Enhanced safeguards should be in place for strong selectors linked to identifiable individuals. The use of every such selector must be justified by the intelligence services and that justification should be scrupulously recorded and be subject to a process of prior internal authorisation providing for separate and objective verification of whether the justification conforms to the principles of necessity and proportionality. [355]</span></li></ul><ul style="text-align: left;"><li><span style="font-family: georgia;">Each stage of the bulk interception process – including the initial authorisation and any subsequent renewals, the selection of bearers, the choice and application of selectors and query terms, and the use, storage, onward transmission and deletion of the intercept material – should be subject to supervision by an independent authority. That supervision should be sufficiently robust to keep the interference with Art 8 rights to what is “necessary in a democratic society”. In order to facilitate supervision, detailed records should be kept by the intelligence services at each stage of the process. 
[356]</span></li></ul><ul style="text-align: left;"><li><span style="font-family: georgia;">Finally, an effective remedy should be available to anyone who suspects that his or her communications have been intercepted by the intelligence services, either to challenge the lawfulness of the suspected interception or the Convention compliance of the interception regime. A remedy that does not depend on notification to the interception subject can be effective. But it is then imperative that the remedy should be before a body which, while not necessarily judicial, is independent of the executive and ensures the fairness of the proceedings, offering, in so far as possible, an adversarial process. The decisions of such authority shall be reasoned and legally binding with regard, inter alia, to the cessation of unlawful interception and the destruction of unlawfully obtained and/or stored intercept material. [357]</span></li></ul></span><p></p>
<p class="MsoNormal"><span style="font-family: georgia;">The court also provided guidance on sharing intercept
material with agencies in other countries.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In the light of the above, the Court will determine whether
a bulk interception regime is Convention compliant by conducting a global
assessment of the operation of the regime. Such assessment will focus primarily
on whether the domestic legal framework contains sufficient guarantees against
abuse, and whether the process is subject to “end-to-end safeguards”. In doing
so, the Court will have regard to the actual operation of the system of interception,
including the checks and balances on the exercise of power, and the existence
or absence of any evidence of actual abuse. [360]<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Court also observed that it was not persuaded that the
acquisition of related communications data through bulk interception is
necessarily less intrusive than the acquisition of content. It therefore
considered that the interception, retention and searching of related
communications data should be analysed by reference to the same safeguards as
those applicable to content. [363]<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That said, the Court observed that while the interception of related
communications data would normally be authorised at the same time the
interception of content is authorised, once obtained they could permissibly be
treated differently by the intelligence services. </span></p><p class="MsoNormal"><span style="font-family: georgia;">In view of the different
character of related communications data and the different ways in which they
are used by the intelligence services, as long as the aforementioned safeguards were in place, the legal provisions governing
their treatment did not necessarily have to be identical in every respect to
those governing the treatment of content. [364]<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Implications for the Investigatory Powers Act 2016<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">Where does this leave the 2016 Act? The Act ticks several
important boxes, notably the “double lock” system of approval of bulk warrants
by a Judicial Commissioner introduced after the end of the RIPA regime. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">When considering the Convention compliance of the IP Act
regime the <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> decision is probably more factually relevant than
the </span><i style="font-family: georgia;">BBW</i><span style="font-family: georgia;"> decision, since it addresses a regime that featured initial authorisation by an independent court.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The IP Act in some respects provides stronger safeguards than
those that fell short in <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> – thus the UK IPT was held up as an example
of what was possible in the area of </span><i style="font-family: georgia;">ex post facto</i><span style="font-family: georgia;"> review.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">On the other hand, the Swedish regime provided for mandatory
presence of a privacy protection representative at Foreign
Intelligence Court sessions. That was identified as a relevant safeguard to be weighed against the
fact that the Court had never held a public hearing and that all its decisions
were confidential. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is no provision in the IP Act for a privacy protection
representative to make submissions in the bulk warrant approval process. As to
publicising bulk warrant approval decisions, in his April 2018 <a href="https://ipco-wpmedia-prod-s3.s3.eu-west-2.amazonaws.com/20180403-IPCO-Guidance-Note-2.pdf" target="_blank">Advisory Notice</a> the Investigatory Powers
Commissioner said:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The Judicial Commissioners will
consider making any decisions on approvals public, subject to any statutory
limitations and necessary redactions.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is noteworthy that the latest <a href="https://ipco-wpmedia-prod-s3.s3.eu-west-2.amazonaws.com/IPC-Annual-Report-2019_Web-Accessible-version_final.pdf">Annual Report</a> of the
Investigatory Powers Commissioner (for 2019) records that a Judicial
Commissioner issued the first approvals of a communications data retention notice
regarding internet connection records. It also describes a potential obstacle to approval of warrants posed by MI5's IT issues. Whilst this evinces a degree of openness, it does not go as far as (for instance) a practice of publishing
Judicial Commissioner decisions on points of legal interpretation.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Given the multifactorial, trade-off-oriented approach of the
Grand Chamber, it is impossible to be categorical about whether this aspect of the
IP Act regime presents Convention compliance problems. On the basis of <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;"> we can expect, however, that it will be argued that either a privacy (and freedom of expression?) representative should be able to make
submissions in the bulk warrant approval decision-making process, or the possibility
of publishing elements of bulk warrant approval decisions should be explored
further, or perhaps both.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As for the double-lock procedure itself, although the
Secretary of State remains the primary decision-maker, and it is occasionally
suggested that Judicial Commissioner approval, being based on judicial review
principles, falls short of full scrutiny, it should not be forgotten
that the <a href="https://ipco-wpmedia-prod-s3.s3.eu-west-2.amazonaws.com/20180403-IPCO-Guidance-Note-2.pdf" target="_blank">Advisory Notice</a> issued by the IPC in April 2018 stated that the Judicial Commissioners would not apply the relatively hands-off ‘Wednesbury reasonableness’
test, but instead the judicial review test applied by the domestic courts when
considering interferences with fundamental rights. That would be
taken into account in any assessment of the level of scrutiny applied to warrants.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Another area of the IP Act that is likely to attract attention is the IP Act's bulk communications data acquisition warrant. This is the successor to S.94
of the Telecommunications Act 1984, which the government admitted in November
2015 had been used for bulk acquisition of communications data from
communications service providers. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Unlike bulk interception under RIPA (and now under the
IP Act), the bulk communications data acquisition warrant is not focused on foreign
intelligence purposes. Given the various references in the <i>BBW</i> and <i>R</i></span><span style="font-family: georgia;"><i>ä</i></span><i style="font-family: georgia;">ttvisa</i><span style="font-family: georgia;">
judgments to bulk interception being primarily used for foreign intelligence, and
the acknowledgment that bulk communications data should not be regarded as less
sensitive than content, the Convention compliance of a domestic bulk acquisition
regime may fall to be considered in the future.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">A potential problem area, both for bulk interception
and communications data acquisition, is journalistic privilege. Although the IP
Act contains stronger protections for journalistic material than did RIPA, it may be questioned whether those, at least of themselves, are sufficient to meet the criticisms
contained in the two ECtHR judgments.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Returning to the central theme of the Grand Chamber judgments, does the IP Act provide sufficient end-to-end safeguards over the bulk interception process? Following the Chamber judgment in 2018 I suggested that since the 2016 Act did not
spell out whether end-to-end oversight was applied to all stages of the bulk
interception process, more would need to be done to fill that gap (remembering
that it is not enough for that simply to be done – it must be required to be done by
means of clearly stated public rules). That view is reinforced by the Grand
Chamber judgment. I can do no better than repeat <a href="https://www.cyberleagle.com/2018/10/what-will-be-in-investigatory-powers.html">what I said then</a>: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Beyond that, under the IP Act
the Judicial Commissioners have to consider at the warrant approval stage the
necessity and proportionality of conduct authorised by a bulk warrant. Arguably
that includes all four stages identified by the Strasbourg Court (see my
<a href="https://www.ipco.org.uk/docs/Comments%20to%20IPCO%20Bulk%20Powers%20Proportionality.pdf">submission to IPCO</a> earlier this year). If that is right, the RIPA gap may have
been partially filled. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">However, the IP Act does not
specify in terms that selectors and search criteria have to be reviewed.
Moreover, focusing on those particular techniques already seems faintly
old-fashioned. The Bulk Powers Review reveals the extent to which more
sophisticated analytical techniques such as anomaly detection and pattern
analysis are brought to bear on intercepted material, particularly
communications data. Robust end to end oversight ought to cover these
techniques as well as use of selectors and automated queries. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">The remainder of the gap could
perhaps be filled by an explanation of how closely the Judicial Commissioners
oversee the various selection, searching and other analytical processes. <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">Filling this gap may not
necessarily require amendment of the IP Act, although it would be preferable if
it were set out in black and white. It could perhaps be filled by an IPCO
advisory notice: first as to its understanding of the relevant requirements of
the Act; and second explaining how that translates into practical oversight, as
part of bulk warrant approval or otherwise, of the end to end stages involved
in bulk interception (and indeed the other bulk powers).”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The case for the gap to be filled formally is reinforced when
we consider that the government has publicly referred to discussions that have
been taking place with IPCO to strengthen end-to-end supervision in practice. The
Grand Chamber judgment records the government’s argument that:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Robust independent oversight of
selectors and search criteria was therefore within the IC Commissioner’s
powers: by the time of his 2014 report he had specifically put in place systems
and processes to make sure that actually occurred, and, following the Chamber
judgment, the Government had been working with the IC Commissioner’s Office to
ensure that there would be enhanced oversight of selectors and search criteria
under IPA.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In his Annual Report for 2019 (published in December 2020)
the Investigatory Powers Commissioner stated:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Our oversight of bulk powers has
evolved over the past year (see para 10.27). This reflected the European Court
of Human Right’s judgment in the <i>Big Brother Watch and others v UK</i> case,
and the Intelligence and Security Committee’s (ISC) Privacy and Security Report
of March 2015. We reviewed our approach to inspecting bulk interception in 2019,
considering the technically complex ways in which bulk interception is
implemented and from 2020 our inspections will include a detailed examination
of selectors and search criteria.”<o:p></o:p></span></p>
<span style="line-height: 115%;"><span style="font-family: georgia;">Now that we have the Grand
Chamber judgment the case appears to be stronger for the end to end
oversight arrangements, and IPCO’s interpretation of the 2016 Act in that regard, to be
spelled out publicly. That would also be well timed for the forthcoming review
of the operation of the 2016 Act that is required to start in a year’s time.</span></span><div><span style="font-family: georgia;"><br /></span><div><span style="line-height: 115%;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><span style="font-family: georgia;"><br /></span></span></div></div><p class="MsoNormal"><b><span style="font-family: georgia;">Harm Version 3.0: the draft Online Safety Bill</span></b><span style="font-family: georgia;"> (16 May 2021)</span></p><p class="MsoNormal"><span style="font-family: georgia;">Two years on from the April 2019 <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" target="_blank">Online Harms White Paper</a>,
the government has published its <a href="https://www.gov.uk/government/publications/draft-online-safety-bill" target="_blank">draft Online Safety Bill</a>. It is a hefty beast:
133 pages and 141 sections. It raises a slew of questions, not least around
press and journalistic material and the newly-coined “content of democratic
importance”. Also, for the first time, the draft Bill spells out how the duty
of care regime would apply to search engines, not just to user generated content sharing service providers.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This post offers first impressions of a central issue that
started to take final shape in the government’s December 2020 <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response" target="_blank">Full Response</a> to consultation: the apparent
conflict between imposing content monitoring and removal obligations
on the one hand, and the government’s oft-repeated commitment to freedom of
expression on the other - now translated into express duties on service
providers. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">That issue overlaps with a question that has dogged the
Online Harms project from the outset: what does it mean by safety and harm? The answer shapes the potential impact of the
legislation on freedom of expression. The broader and vaguer the notion of harm,
the greater the subjectivity involved in complying with the duty of care, and
the greater the consequent dangers for online users' legitimate speech. </span></p><p class="MsoNormal"><span style="font-family: georgia;">The draft Bill represents the government's third attempt at defining harm (if we include the White Paper, which set no limit). The scope of harm proposed
in its second version (the Full Response) has now been significantly widened</span><span style="font-family: georgia;">. </span></p><p class="MsoNormal"><span style="font-family: georgia;">For </span><span style="font-family: georgia;">legal but harmful content</span><span style="font-family: georgia;"> t</span><span style="font-family: georgia;">he government apparently now means to set an overall backstop limitation of "physical or psychological harm", but whether the draft Bill achieves that is doubtful. In any event that would still be broader than the general definition of harm proposed in the Full Response: </span><span style="font-family: georgia;">a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. For illegal content the Full Response's general definition would not apply; and the new backstop definition would have only limited relevance. </span></p><p class="MsoNormal"><o:p></o:p></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Moderation and filtering duties<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">If one provision can be said to lie at the heart of the
draft Bill, it is section 9(3). This describes duties that will apply to all the
estimated 24,000 in-scope service providers. It is notable that pre-Brexit, duties (a) to (c) would have fallen foul of the ECommerce Directive's <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">Article 15</a> ban on imposing general monitoring obligations on hosting
providers. Section 9(3) thus departs from 20 years of EU and UK policy aimed at protecting the freedom of expression and privacy of online users. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Section 9(3) imposes: <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">“<i>A duty to operate a service using proportionate systems
and processes designed to—<o:p></o:p></i></span></p>
<p class="MsoNormal" style="text-indent: 36pt;"><i><span style="font-family: georgia;">(a) minimise the presence of
priority illegal content;<o:p></o:p></span></i></p>
<p class="MsoNormal" style="text-indent: 36pt;"><i><span style="font-family: georgia;">(b) minimise the length of time
for which priority illegal content is present;<o:p></o:p></span></i></p>
<p class="MsoNormal" style="text-indent: 36pt;"><i><span style="font-family: georgia;">(c) minimise the dissemination
of priority illegal content;<o:p></o:p></span></i></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;"><i>(d) where the provider is
alerted by a person to the presence of any illegal content, or becomes aware of
it in any other way, swiftly take down such content.</i>”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Duty (d) approximately parallels the <a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">hosting liability shield</a> in the ECommerce Directive, but cast in terms of a <a href="https://www.techdirt.com/articles/20200824/13595845171/intermediary-liability-responsibilities-post-brexit-graham-smith.shtml" target="_blank">positive regulatory obligation</a> to operate take down processes, rather than potential exposure to liability for a user's content should the shield be disapplied on gaining knowledge of its illegality. </span></p><p class="MsoNormal"><span style="font-family: georgia;">As is typical of regulatory legislation, the draft Bill is not a finished work. It is more like a preliminary drawing intended to be filled out later. For instance, the extent of </span><span style="font-family: georgia;">the proactive moderation and filtering obligations implicit in Section 9(3) depends on </span><span style="font-family: georgia;">what constitutes ‘priority
illegal content’. That is not set out in the draft Bill, but would be designated in secondary legislation prepared by the Secretary of State. </span><span style="font-family: georgia;">The
same holds for ‘priority content that is harmful to adults’, and for a parallel category relating to children,
which underpin other duties in the draft Bill. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Since Section 9(3) and the other
duties make no sense without the various kinds of priority content first being designated, regulations
would presumably have to be made before the legislation can come into force. The breadth of the Secretary of State's discretion in designating priority content is discussed below.</span></p><p class="MsoNormal"><span style="font-family: georgia;">If secondary legislation is a layer of detail applied to the preliminary drawing a further layer, yet more detailed, will consist of codes of practice, guidance and risk profiles for different kinds of</span><span style="font-family: georgia;"> service, all issued by Ofcom.</span></p><p class="MsoNormal"><span style="font-family: georgia;">This regulatory vessel would be pointed in the government's desired direction by a statement of strategic online safety priorities issued by the Secretary of State, to which Ofcom would be required to have regard. The statement could set out particular outcomes. The Secretary of State would first have to consult with Ofcom, then lay the draft before Parliament so as to give either House the opportunity to veto it. </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The moderation and filtering obligations implicit in Section
9(3) and elsewhere in the draft Bill would take the lion’s share – £1.7bn </span><span style="font-family: georgia;">–</span><span style="font-family: georgia;"> of
the £2.1bn that the government’s <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985283/Draft_Online_Safety_Bill_-_Impact_Assessment_Web_Accessible.pdf">Impact Assessment</a> reckons in-scope providers will
have to spend on complying with the legislation over the first 10 years. Moderation
is expected to be both technical and human:</span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“…it is expected that undertaking
additional content moderation (through hiring additional content moderators or
using automated moderation) will represent the largest compliance cost faced by
in-scope businesses.” (Impact Assessment [166])<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Additional moderation costs are expected to be incurred in greater proportion by the largest (Category 1) providers: 7.5% of
revenue for Category 1 organisations and 1.9% for all other in-scope
organisations (Impact Assessment [180]). That presumably reflects the
obligations specific to Category 1 providers in relation to legal but 'harmful
to adults’ content.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Collateral damage to legitimate speech</b><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Imposition of moderation and filtering obligations,
especially at scale, raises the twin spectres of interference with users’
privacy and collateral damage to legitimate speech. The danger to legitimate
speech arises from misidentification of illegal or harmful content and lack of clarity about what is illegal or harmful. Incidence
of collateral damage resulting from imposition of such duties is likely to be affected
by: <o:p></o:p></span></p>
<ul style="text-align: left;"><li><span style="font-family: georgia;">The <b>proof threshold</b> that triggers the duty. The lower the standard to which a service provider has to be satisfied of illegality or harm, the greater the likelihood of erroneous removal or inhibition.</span></li><li><span style="font-family: georgia;"><b>Scale</b>. The greater the scale at which moderation or filtering is carried out, the less feasible it is to take account of individual context. Even for illegal content, the assessment of illegality will, for many kinds of offence, be context-sensitive.</span></li><li><span style="font-family: georgia;"><b>Subjectivity</b> of the harm. If harm depends upon the subjective perception of the reader, a harm standard calibrated to the most easily offended reader may develop.</span></li><li><span style="font-family: georgia;"><b>Vagueness.</b> If the kind of harm is so vaguely defined that no sensible line can be drawn between identification and misidentification, then collateral damage is hard-wired into the regime.</span></li><li><span style="font-family: georgia;"><b>Scope of harm.</b> The broader the scope of harm to which a duty applies, the more likely it is to include subjective or vague harms.</span></li></ul>
<p class="MsoNormal"><span style="font-family: georgia;">Against these criteria, how does the draft Bill score on the
collateral damage index? <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Proof threshold.</b> S.41 defines illegal content as content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence. Content does not have to be definitely illegal in order for the section 9(3) duties to apply. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The <b>scale</b> of the required moderation and filtering is
apparent from the Impact Assessment.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The <b>scope of harm</b> has swung back and forth. Version 1.0 was contained in the
government’s April 2019 White Paper. It encompassed the <b>vaguest</b> and most <b>subjective</b> kinds of harm. Most
kinds of illegality were within scope. For harmful but legal content there was no
limiting definition of harm. Effectively, the proposed regulator (now
Ofcom) could have deemed what is and is not harmful. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">By the time of its Full Consultation Response in December
2020 the government had come round to the idea that the proposed duty of care should
relate only to defined kinds of harm, whether they arose from illegal user content or user content that was legal but harmful [Full Response 2.24]. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In this Version 2.0, harm would consist of a “reasonably foreseeable risk of a
significant adverse physical or psychological impact on individuals”. A criminal offence would therefore be in scope of a
provider’s duty of care only if the offence presented that kind of risk. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although <a href="https://www.cyberleagle.com/2020/02/an-online-harms-compendium.html">still problematic</a> in retaining some elements of
subjectivity through inclusion of psychological impact, the Full Response proposal
significantly shifted the focus of the duty of care towards personal safety
properly so-called. It was thus more closely aligned to <a href="https://www.cyberleagle.com/2018/10/take-care-with-that-social-media-duty.html" target="_blank">the subject matter of comparable offline duties of care</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Version 3.0 </b></span><span style="font-family: georgia;">The draft Bill states as a general definition that "harm" means "physical or psychological harm". This is an attenuated version of the general definition proposed in the Full Response. However, the draft Bill does not stipulate that 'harmful' should be understood in the same limited way. The result of that omission, combined with other definitions, could be to give the Secretary of State regulation-making powers for legal but harmful content that are, on the face of them, not limited to </span><span style="font-family: georgia;">physical or psychological harm. </span></p><p class="MsoNormal"><span style="font-family: georgia;">This may well not be the government's intention. When giving evidence to the Culture Media and Sport Commons Committee last week, Secretary of State Oliver Dowden stressed (<a href="https://parliamentlive.tv/event/index/996fa6d7-21bf-4706-8a6d-aa6be6c1c243?in=14:57:00&out=15:05:02">14:57 onwards</a>) that regulations would not go beyond physical or psychological harm. This could usefully be explored during pre-legislative scrutiny. </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">For <b>legal but harmful</b> content the draft Bill does provide a more developed version of the Full Response’s general definition of harm,
tied to impact on a hypothetical adult or child "of ordinary sensibilities". This is evidently an attempt to <a href="https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html">inject some objectivity</a> into the assessment of harm. It then adds further layers addressing impact on members of particularly affected groups or particularly affected people with certain characteristics (neither specified), impact on a specific person about whom the service provider knows, and </span><span style="font-family: georgia;">indirect impact</span><span style="font-family: georgia;">. These provisions will undoubtedly attract close scrutiny. </span><span style="font-family: georgia;"> </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In any event, this complex definition of harm does not have universal application within the draft Bill. It governs only a residual category of content outside the
Secretary of State’s designated descriptions of <b>priority harmful content</b> for adults and children.
The longer the Secretary of State's lists of priority harmful content designated in secondary legislation, the less ground in principle would be covered by content to which the complex definition applies. The Secretary of State is not constrained by the complex definition when designating priority harmful content. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Nor, on the face of it, is the Secretary of State limited to physical or psychological harm. However, as already flagged, that may well not represent the intention of the government. </span><span style="font-family: georgia;">That omission would be all the more curious, given that </span><span style="font-family: georgia;">Ofcom has a consultation and recommendation role in the regulation-making process, and the simple definition </span><span style="font-family: georgia;">– physical or psychological harm - does constrain Ofcom’s recommendation remit. </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Secretary of State has a parallel power to designate <b>priority
illegal content</b> (which underpins the section 9(3) duties above) by
secondary legislation. He cannot include offences relating to:<o:p></o:p></span></p>
<ul style="text-align: left;"><li><span style="font-family: georgia;">Infringement of intellectual property rights</span></li><li><span style="font-family: georgia;">Safety or quality of goods (as opposed to what kind of goods they are)</span></li><li><span style="font-family: georgia;">Performance of a service by a person not qualified to perform it</span></li></ul>
<p class="MsoNormal"><span style="font-family: georgia;">In considering whether to designate an offence the Secretary
of State does have to take into account, among other things, the level of risk
of harm being caused to individuals in the UK by the presence of content that
amounts to the offence, and the severity of that harm. Harm here does mean physical
or psychological harm. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">As with harmful content, illegal content includes a residual
category designed to catch illegality neither specifically identified in the draft
Bill (terrorism and CSEA offences) nor designated in secondary legislation as
priority illegal content. This category consists of “Other offences of which the victim or intended
victim is an individual (or individuals).” This, while confined to individuals, is not limited to physical or psychological harm. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The first round of secondary legislation designating
categories of priority illegal and harmful content would require affirmative
resolutions of each House of Parliament. Subsequent regulations would be
subject to negative resolution of either House.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">To the extent that the government’s rowback from the general definition of harm contained in the Full Response enables more <b>vague and subjective</b>
kinds of harm to be brought back into scope of service provider duties, the
risk of collateral damage to legitimate speech would correspondingly increase. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Internal contradictions</span></b></p><p class="MsoNormal"><span style="font-family: georgia;"></span></p><p class="MsoNormal"><span style="font-family: georgia;">The draft Bill lays down various risk assessments that in-scope providers must undertake, taking into account a ‘risk profile’ of that kind of service prepared by Ofcom and to be included in its guidance about risk assessments.</span></p><p class="MsoNormal"><span style="font-family: georgia;">As well as the Section 9(3) moderation and filtering duties
set out above, for illegal content a service provider would be under a duty to
take proportionate steps to mitigate and effectively manage the risks of harm
to individuals, as identified in the service’s most recent illegal content risk
assessment.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In parallel to these duties, the service provider is placed
under a duty to have regard to the importance of protecting users’ right to
freedom of expression within the law when deciding on, and implementing, safety
policies and procedures.</span></p><p class="MsoNormal"><span style="font-family: georgia;">However, since the very duties imposed by the draft Bill create a </span><span style="font-family: georgia;">risk of collateral damage to legitimate speech, a conflict between duties is inevitable</span><span style="font-family: georgia;">. The potential for conflict increases with the scope of the duties and the breadth and subjectivity of their subject matter. </span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The government has acknowledged the risk of
collateral damage in the context of Category 1 services, which would be subject
to duties in relation to lawful content harmful to adults in addition to the
duties applicable to ordinary providers.</span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Category 1 service providers would have to prepare
assessments of their impact on freedom of expression and (as interpreted by the government's <a href="https://www.gov.uk/government/news/landmark-laws-to-keep-children-safe-stop-racial-hate-and-protect-democracy-online-published">launch announcement</a>) demonstrate that they
have taken steps to mitigate any adverse effects. The government commented: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“These measures remove the risk
that online companies adopt restrictive measures or over-remove content <i>in
their efforts to meet their new online safety duties</i>. An example of this
could be AI moderation technologies falsely flagging innocuous content as harmful,
such as satire.” (emphasis added)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">This passage acknowledges the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal. That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.</span></p><p class="MsoNormal"><span style="font-family: georgia;">No obligation to conduct a
freedom of expression risk assessment could </span><i style="font-family: georgia;">remove </i><span style="font-family: georgia;">the risk</span><i style="font-family: georgia;"> </i><span style="font-family: georgia;">of
collateral damage by over-removal. That smacks of faith in the
existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it. </span></p><p class="MsoNormal"><span style="font-family: georgia;">Internal conflicts between duties, </span><span style="font-family: georgia;">underpinned by the Version 3.0 approach to the notion of harm,</span><span style="font-family: georgia;"> sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as - to use </span><a href="https://www.gov.uk/government/speeches/oliver-dowdens-opinion-piece-for-the-telegraph-on-the-online-safety-bill" style="font-family: georgia;" target="_blank">the Secretary of State's words</a><span style="font-family: georgia;"> - a censor’s charter. </span></p><div> </div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><p><br /></p>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-65449274457183531562021-04-06T08:43:00.003+01:002021-04-07T04:27:50.605+01:00Seriously annoying tweets<span style="font-family: georgia;">The row over Section 59 of the Police, Crime, Sentencing and Courts Bill is reminiscent of a backwater pond that has lain undisturbed for years. 
Then someone decides to poke a stick in it and all manner of noxious fumes are released.<br /><br />In this instance the pond is the common law offence of public nuisance. The stick that has disturbed it is the government’s proposal to replace the common law offence with a statutory codification. The noxious fume that has been released is the risk of criminalising legitimate public protest. <br /><br />Section 59 would replace the common law public nuisance offence with a statutory equivalent. The new offence would consist of intentionally or recklessly causing serious harm, or a risk of serious harm, to the public or a section of the public. Such harm would include “serious distress, serious annoyance, serious inconvenience or serious loss of amenity”. There would be a defence of reasonable excuse. “Serious annoyance”, in particular, has been criticised as overly broad. <br /><br />Section 59 is situated in the Public Order part of the Bill: a collection of provisions about policing public demonstrations. But Section 59 is not limited to behaviour in the street. In impeccably technology-neutral fashion it would apply to any "act". Posting a tweet is as much an "act" as gluing oneself to the road. <br /><br />Criticism of Section 59 has focused on its <a href="https://rozenberg.substack.com/p/more-than-a-nuisance" target="_blank">potential for affecting street protests</a>. Little attention has been paid to </span><span style="font-family: georgia;">online communications. How would “serious annoyance” translate from street to tweet? Is a seriously annoying tweet the same kind of thing as a seriously annoying street protest? Is the potential impact of the Section 59 offence greater, less or no different in the online rather than the physical environment? Spoiler alert: it is at least the same and probably greater. 
How much greater we can only guess at – reason enough to send Section 59 back to the drawing board.</span><div><div><span style="font-family: georgia;"><br /><b>Origin of Section 59 </b><br /><br />The official response to concerns about Section 59 is that there is nothing to see here: it merely implements the Law Commission’s 2015 recommendations for codification of the common law public nuisance offence. The Secretary of State for Justice during the Second Reading of the Bill described concerns about “annoyance” as a “canard” (see further below). <br /><br />It does appear that the Law Commission’s recommendations excited no public controversy at the time. Its consultation paper attracted a total of 10 responses on the public nuisance offence, none of which opposed its overall proposals. <br /><br />However, for at least two reasons things are not that simple. First, the Law Commission did not discuss how the offence might apply to public online communications. (For that matter it barely touched on real world protest, even though the common law offence had been deployed against "sit-down" demonstrations in the 1960s.) Second, in recommending “serious annoyance” as a criterion it reformulated the common law offence in terms that, although intended to keep the statutory offence within clear bounds, may have the opposite result when applied to </span><span style="font-family: georgia;">online speech. </span><span style="font-family: georgia;">It is hard to avoid the impression that the question of what “serious annoyance” might mean when transposed from street to tweet was not on the Law Commission’s</span><span style="font-family: georgia;"> radar.</span><span style="font-family: georgia;"> </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Before delving into those issues, some context is helpful. 
</span></div><div><span style="font-family: georgia;"><br />The Law Commission’s June 2015 report was the first product of a larger project to simplify the criminal law. The report <a href="https://www.lawcom.gov.uk/codifying-public-nuisance-and-outraging-public-decency/">recommended</a> that the common law public nuisance offence should be replaced with a statutory codification. The statutory offence would differ in some respects from the common law offence. The mental element would be set at intention or recklessness rather than negligence. The statutory offence would have to prescribe a maximum penalty. The Law Commission made no recommendation as to that, other than to observe that the maximum sentence should reflect that the offence was intended to address serious cases for which other offences were not adequate. The Policing Bill proposes a maximum custodial sentence of 10 years. <br /><br />The Law Commission’s proposals sat on the shelf for the best part of six years. Why the government has chosen this moment to blow the dust off them and poke a stick in the pond is a matter of speculation. Whatever the reason, the government has done it and now people are reading the wording of Section 59. They see “serious annoyance” and question how that wording would apply to street demonstrations. Equally, we can ask how it would apply to online behaviour. <br /><br /><b>The Law Commission did not consider online communications </b><br /><br />Neither the 2015 Law Commission Report nor its preceding consultation paper addressed how the codified offence, including the “serious annoyance” language, might apply to public online communications such as social media posts. The common law offence was certainly capable of doing so, as the Law Commission later acknowledged in its 2018 Scoping Report on Abusive and Offensive Online Communications. 
<br /><br />This extract from the 2015 Report illustrates how far removed the Law Commission’s focus was from online speech:<br /><blockquote>“In our view, its proper use is to protect the rights of members of the public to enjoy public spaces and use public rights (such as rights of way) without danger, interference or annoyance.” </blockquote></span><div><span style="font-family: georgia;">For whatever reason, the 2015 Report paid little attention to the possible effect of the public nuisance offence on freedom of expression, whether offline or online. It did note that its proposed reasonableness defence “would include cases where the defendant’s conduct is in exercise of a right under Article 10 (freedom of expression) or 11 (freedom of assembly and association) of the European Convention on Human Rights.” <br /><br />But it added in a footnote: “It is somewhat difficult to imagine examples in which this point arises in connection with public nuisance.” This comment is not easy to understand in the online context, where any application of the offence to an online post is likely to engage Article 10. Use of the common law offence against sit-down demonstrations in the 1960s also seems pertinent. <br /><br />Over-vigorous application of a statutory offence might be greeted in similar terms to those employed by the Lord Chief Justice in the Twitter Joke Trial case (<i>Chambers v DPP</i>), an appeal from conviction under s.127 of the Communications Act 2003:<br /><blockquote>“The 2003 Act did not create some newly minted interference with the first of President Roosevelt's essential freedoms – freedom of speech and expression. 
Satirical, or iconoclastic, or rude comment, the expression of unpopular or unfashionable opinion about serious or trivial matters, banter or humour, even if distasteful to some or painful to those subjected to it should and no doubt will continue at their customary level, quite undiminished by this legislation.”</blockquote>But when we are considering conversion of public nuisance into a statutory offence, is it enough to hope that what on the face of it looks like overly broad language (with concomitant chilling effects on speech) would be rescued by the ECHR? <br /><br /><b>The Law Commission’s reformulation <br /></b><br />The common law offence, as endorsed in 2005 in the leading House of Lords case of <i>Rimmington</i>, is articulated in terms of "endangering the comfort of the public". The Law Commission described that terminology as "somewhat archaic", "wide and vague" in everyday language, which "could include very trivial reasons for displeasure". It proposed instead: "serious distress, annoyance, inconvenience or loss of amenity". In Section 59 this is rendered as “serious distress, serious annoyance, serious inconvenience or serious loss of amenity”. <br /><br />The Law Commission evidently considered that by recommending a change in language from "endangering the comfort of the public" to "serious annoyance" it was narrowing the potential scope of the offence. It certainly intended to exclude the possibility of catching trivial displeasure. <br /><br />Yet, when applied to pure speech, the reformulation seems less constraining than the original. "Comfort" could be taken to connote a physical or sensory element that is not a requirement for "annoyance": consider the disruptive effect on the public of a hoax bomb threat, compared with public reaction to the contents of an offensive tweet. 
</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;"><b>Back to Blackstone?</b><br /><br />If "annoyance" is a well understood term in relation to the common law offence, might that provide a basis on which to interpret Section 59 narrowly? </span><span style="font-family: georgia;">Blackstone referred to "nuisances that are an annoyance to all the King’s subjects". </span></div><div><span style="font-family: georgia;"><br />In the 1700s public nuisance concerned environmental and public health misdeeds such as "noisome and offensive stinks and smells", polluting the Thames, or taking a child infected with smallpox through a public street. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Whilst those readily fit the description of annoyances, how would that read across to speech? Can we even conceptualise a foul-smelling tweet? A noxious vapour and an obnoxious tweet are categorically different, one impinging on the senses and the other on the mind. Yet under Section 59 the courts would be asked to apply the same statutory language to both. The one context does not provide a guide to the other. <br /><br />As the Law Commission observed in its 2015 Report, the common law offence has expanded from those roots to cover such diverse behaviour as plotting to switch off the lights at a football match, threatening suicide by jumping from bridges, hosting acid house parties, hanging from bridges, jumping into a river during a boat race, sniffing glue in public, lighting flares or fireworks at football matches, or recording videos threatening bombings – what it called "general public misbehaviour". 
<br /><br />As the common law offence has developed since Blackstone to cover a greater variety of misbehaviour, correspondingly greater caution has to be exercised over the language used to characterise the elements of the offence.<br /><br /><b>Did the Law Commission mean to include online communications? </b><br /><br />If the Law Commission did not in its 2015 Report specifically consider the impact of its recommendations on online communications, might that be because the statutory offence was not intended to apply to them? <br /><br />As a largely technical exercise in codification, a proposed statutory offence would be expected to mirror the scope of the common law offence unless explicitly stated otherwise. <br /><br />As to the common law offence, Lord Nicholls in <i>Rimmington</i> posed the example of a hoax message of the existence of a public danger, such as a bomb in a railway station, communicated by telephone. That, he said, even if communicated to one person alone, would be a public nuisance because it was intended to be passed on to users of the railway station. If a message communicated in that way can be a public nuisance, then all the more so a tweet published directly to the world. <br /><br />If there were any doubt about that, the Law Commission acknowledged in its 2018 Scoping Report on Abusive and Offensive Online Communications that the common law offence is already capable of applying to public social media posts. The Law Commission identified overlap with, for instance, existing statutory harassment and communications offences. <br /><br />The 2015 Law Commission Report discussed the <i>Rimmington</i> judgment in detail. It did not suggest that misbehaviour covered by its proposed statutory offence should exclude electronic communications. The technology-neutral approach of Section 59 is no accident, even if the consequences for social media and internet communications were not discussed. 
<br /><br /><b>Would Section 59 be used against online behaviour? </b><br /><br />The 2018 Law Commission Scoping Report observed: “Given the wide array of statutory offences covering online harassment, it is difficult to see public nuisance being justifiably used in favour of these other offences in cases of online harassment and stalking.” <br /><br />It noted that the common law offence was “very broad in scope”. While acknowledging that the public nuisance offence could cover online behaviour, the Law Commission said that it was not aware of any prosecution. Nor does </span><span style="font-family: georgia;">the Crown Prosecution Service </span><a href="https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media" style="font-family: georgia;">social media prosecution guidance</a><span style="font-family: georgia;"> mention public nuisance. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">None of that, however, means that the same would hold true once the public nuisance offence is given statutory force. </span></div><div><span style="font-family: georgia;"><br />The Law Commission’s observation about justifiability of prosecution of the common law offence rests on the primacy given to statutory offences. As the Law Commission explained in its 2015 Report, there is a presumption against using a common law offence where the same territory is covered by a statutory offence. That falls away once the public nuisance offence itself becomes statutory. <br /><br />More fundamentally, there</span><span style="font-family: georgia;"> is nothing like a statutory codification to bring a common law offence back to full life and vigour. L</span><span style="font-family: georgia;">anguage embedded in a statute gains strength from the fact that it represents the explicit will of the legislature. 
No longer is the court incrementally developing a common law offence within the bounds of reasonable foreseeability in order to accommodate changing activities. For a statutory offence its task is to interpret specific words to which Parliament has expressly agreed. Once written down in a statute, words tend to take on a life of their own. The broader they are, the greater the potential for them to do so. </span></div><div><span style="font-family: georgia;"><br /><b>Prosecutorial guidance </b><br /><br />The Law Commission suggested that the effect of removing the presumption could be mitigated by development of prosecutorial guidance, which could state that the offence should not be used when a more specific offence is available except for good reasons. Prosecutorial discretion, however, is <a href="https://www.cyberleagle.com/2017/10/towards-filtered-internet-european.html#IllegalityFace">no substitute for an appropriately drawn offence</a>. Where speech is concerned, reliance on prosecutorial discretion is apt to produce the kind of uncertainty that gives rise to a chilling effect on freedom of expression. <br /><br />Even if relying on prosecutorial discretion to mitigate an over-broad offence were an acceptable way of proceeding in the past, for online speech it now has harmful consequences that do not apply offline. Why so? Because when online intermediaries (such as web hosts, discussion forums and social media platforms) are incentivised (or even, come the proposed Online Safety Bill, obliged on pain of regulatory sanctions) to remove illegal content, the test of illegality is not whether a prosecutor would decide to bring charges. It is whether the content falls within the letter of the statute. It matters more than ever before that the language of a statute should clearly and precisely catch only what it ought to catch and nothing more. 
<br /><br /><b>The canard of annoyance </b><br /><br />The Secretary of State for Justice Robert Buckland, during the Bill’s Commons Second Reading, suggested that concern about the term annoyance was a “canard”. He prayed in aid the authority of Lord Bingham:<br /><blockquote>“The law had been restated with reference to the use of the word “annoyance” by none other than the late and noble Lord Bingham when he was in the House of Lords. He set out the law very clearly. Clause 59 amounts to no more than a reiteration of the excellent work of the Law Commission. To say anything else is, frankly, once again a confection, a concoction and a twisting of the reality.”</blockquote>This presumably was a reference to Lord Bingham’s speech in <i>Rimmington</i>. Lord Bingham concluded that the common law public nuisance offence, interpreted in the way that he specified, passed the legality test:<br /><blockquote>“A legal adviser asked to give his opinion in advance would ascertain whether the act or omission contemplated was likely to inflict significant injury on a substantial section of the public exercising their ordinary rights as such: if so, an obvious risk of causing a public nuisance would be apparent; if not, not."</blockquote>Did Lord Bingham intend "significant injury" to include "serious annoyance"? The critical passage in his speech is at paragraph 36:<br /><blockquote>“I would for my part accept that the offence as defined by Stephen, as defined in Archbold (save for the reference to morals), as enacted in the Commonwealth codes quoted above and as applied in the cases (other than <i>R v Soul</i> 70 Cr App R 295) referred to in paras 13 to 22 above is clear, precise, adequately defined and based on a discernible rational principle.”</blockquote>The offence as defined by Stephen was quoted by Lord Bingham at para 10 of his speech. It does not include "annoyance". In paragraphs 9 and 10 he quoted the offence as defined in different editions of Archbold. 
Again there is no mention of "annoyance". He went on in paragraph 11 to examine the Commonwealth codes of Canada, Queensland and Tasmania. None of those mentions "annoyance". At paragraphs 13 to 22 of his speech he reviews numerous cases. None of the passages from judgments that he quotes mentions "annoyance". <br /><br />“Annoyance” was, however, mentioned in the two authorities that Lord Bingham quoted in paragraph 8 of his speech: Hawkins’ <i>Pleas of the Crown</i> (1716) and Blackstone’s <i>Commentaries</i> (1768). Lord Bingham omitted both of those from the critical passage quoted above, endorsing only Stephen and Archbold. <br /><br />That leaves the reference in paragraph 10 of Lord Bingham’s speech to Section 268 of the Indian Penal Code of 1860. That Commonwealth provision includes the phrase "common injury, danger or annoyance". Lord Bingham commented that it seemed likely that the draftsman of that provision intended to summarise the English common law on public nuisance "as then understood". </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Whatever may have been the position in 1860, today there is every reason to doubt whether the expression “serious annoyance” captures either the common law offence as it currently applies, or the statutory offence as it ought to apply, to public online communications.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">[7 April 2021. Added “Commonwealth” to penultimate paragraph.] 
</span></div><div><span style="font-family: georgia;"><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><span style="font-family: georgia;"><br /></span></div></div></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-70139074284940408462021-02-07T18:26:00.011+00:002021-03-09T07:02:34.139+00:00Corrosion-proofing the UK’s intermediary liability protections<p><span style="font-family: georgia;">The UK having now cut its direct ties with EU law, what does
its future hold for the intermediary liability protections in Articles 12 to 15
of the Electronic Commerce Directive?</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Until recently, the government’s policy has been taken to
be as stated in its 2019 <a href="https://web.archive.org/web/20210105143442/https:/www.gov.uk/guidance/the-ecommerce-directive-and-the-uk">“eCommerce
Directive guidance for businesses if there’s no Brexit deal”</a>: <o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Immediately following the UK’s
exit from the EU in a no deal scenario, the government will minimise disruption
by prioritising continuity and stability. Therefore the UK’s policy approach
will continue to align with the provisions contained in the Directive,
including those on liability of intermediary service providers and general monitoring.”
<o:p></o:p></span></p>
<span style="font-family: georgia;">Consistently with that, in October 2020 the government published <a href="https://web.archive.org/web/20201016105650/https://www.gov.uk/guidance/the-ecommerce-directive-after-the-transition-period">post-transition guidance</a>, stating that it "has no current plans to change the UK’s intermediary liability regime or its approach to prohibition on general monitoring requirements".</span><p class="MsoNormal"><span style="font-family: georgia;">Articles 12 to 14 provide limitations on the liability of
conduits, caches and hosts for unlawful user information. <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html" target="_blank">Article 15</a> prohibits
EU member states from imposing general monitoring obligations on those intermediaries.
Whether and how long the government’s commitment to Articles 12 to 15 would
survive was an open question. With nothing said in the UK-EU Trade and Co-Operation
Agreement about online intermediary liability, there appeared to be nothing to
prevent the government – should it wish to depart from its previous policy – from legislating
in future contrary to Articles 12 to 15 - subject always to the possibility of
a legal objection on fundamental rights grounds. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There was a detectable drift away from the overt commitment to Article 15 with the government’s <a href="https://www.cyberleagle.com/2020/12/the-online-harms-edifice-takes-shape.html" target="_blank">Full Consultation Response</a> to the Online Harms White Paper, published on 15 December 2020. The Response strayed into
proposing proactive monitoring obligations that could not readily be reconciled
with that policy. That drift was also evident in the simultaneously published <a href="https://www.gov.uk/government/publications/online-harms-interim-codes-of-practice" target="_blank">Interim Voluntary Codes of Practice on Terrorism, and Online Child Sexual Exploitation and Abuse</a>, which
are in effect a template for obligations likely to be imposed under the future
Online Safety Bill. The Full Response was silent on the apparent conflict with Article
15.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Now, the government has dropped its commitment to maintain
alignment with Article 15. A new version of its post-Brexit <a href="https://www.gov.uk/guidance/the-ecommerce-directive-and-the-uk" target="_blank">eCommerce Directive guidance</a>, published on 18 January 2021, says this:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“The eCommerce Directive also
contains provisions relating to intermediary liability and prohibitions against
imposing general monitoring obligations.<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">The government is committed to
upholding the liability protections now that the transition period has ended.
For companies that host user-generated content on their online services, there
will continue to be a ‘notice and take down’ regime where the platform must
remove illegal content that they become aware of or risk incurring liability.<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">The government also intends to
introduce a new Online Safety regulatory framework. This will require companies
to take action to keep their users safe, including with regard to illegal
content. Details on what this will mean for companies are set out in the Online
Harms White Paper: Full government response to the consultation, and the
government plans to introduce legislation to Parliament this year.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Notably, although a commitment to preserving some kind of hosting
protection remains, there is now silence on preserving the prohibition on general monitoring
obligations. The significance of this omission can hardly be overstated.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Legislation that takes conscious bites out of the Directive’s
protections is one thing. But there is also a more subtle threat. Active
maintenance of the statute book will be needed if the liability protections to
which the government appears to be committed are not to be corroded by simple
neglect. The Article 12 to 14 protections for conduit, caching and hosting
activities are potentially liable to erode over time as the statute book is augmented
and amended. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The reason for this lies partly in the horizontal nature of
the protections. They are not tailored specifically to copyright, to
defamation, to obscenity, or to any of the other myriad kinds of criminal and
civil liability that might be incurred online.
<a href="https://www.cyberleagle.com/2018/04/the-electronic-commerce-directive.html" target="_blank">Articles 12 to 14 are shields</a> that apply across the board, whatever the subject
matter of the liability.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The risk of erosion lies in the way in which successive
governments have gone about legislating those protections. When the ECommerce
Directive was first implemented in UK law, the 2002 Regulations enacted the Art
12 to 14 liability protections across the board: they applied to all existing
laws under which liability within scope of the Directive might be incurred
(except for financial services, for which the protections were legislated
separately).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">But, crucially, the 2002 Regulations stated that they did not
have prospective effect. This meant that they applied only to legislation in
existence when they came into force. On every occasion thereafter that a new
criminal offence or civil wrong was created, or an existing one amended, the
protections required by Arts 12 to 14 had to be specifically enacted for that
offence or civil wrong. Administrative Guidance on Consistency of Future
Legislation issued at the time by the Department of Trade and Industry <a href="https://webarchive.nationalarchives.gov.uk/20081023082947/http:/www.hgc.gov.uk/Client/Content_wide.asp?ContentId=506">stated</a>:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“Legislators will need to give
careful consideration to the question of whether any new requirements - again,
whether in primary, secondary or tertiary legislation and whether reserved or
devolved- create any offences which (or the aiding or abetting of which) could
possibly be committed by a mere conduit, cache or host within the meanings of
Regulations 17-19. If so, they will need to ensure that they recognise these
limitations on the liability of intermediary service providers. Similar
considerations will apply to any form of civil liability created by any new
requirements.”<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">In an ideal world, this would have been done within the primary legislation
that created the new or amended liability. Sometimes that happened. We can, for
instance, see the conduit, caching and hosting protections included in Schedule
1 of the Hate Crime and Public Order (Scotland) Bill currently making its way through the Scottish Parliament. Sometimes, however, it was
overlooked and the omission had to be remedied separately. Since the
protections were required by an EU Directive, the necessary provisions could be
enacted pre-Brexit by secondary legislation under the European Communities Act
1972. This was done on around 15 occasions, in addition to regulations implementing the protections for the financial services sector.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The result is a veritable hodgepodge of primary and
secondary legislation enacted over the best part of 20 years, implementing –
not always using the same language – the intermediary protections required by Articles 12 to 14 of the Directive. At
my last count, in addition to the 2002 Regulations themselves there were over 30 separate subject matter-specific
implementations dotted around different primary and secondary legislation – and I may well not have found
them all. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Post-Brexit, the option of plugging gaps via the European
Communities Act is no longer available. There will therefore be a greater premium on
ensuring that, as in the Scottish Hate Crime Bill, the protections are included in the relevant legislation itself.
If active scrutiny and maintenance are neglected, and the requisite protections are omitted from future legislation that creates new offences and civil liability, there will be a slow accretion
of liabilities and offences to which the Directive’s conduit, caching and
hosting protections do not apply. </span></p><p class="MsoNormal"><span style="font-family: georgia;">If an omission has to be remedied, it would (unless
some usable order-making power that I have not spotted is buried somewhere in
the Brexit legislation) have to be done by further primary legislation. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">There is one potential qualification to this analysis. Could
“Retained EU law” under the 2018 Withdrawal Act give the Directive itself a
degree of post-Brexit prospective effect? If so, could the Directive’s liability
protections be invoked against a future offence created by post-Brexit
legislation which has omitted to address the liability position of conduits, hosts and
caches? I do not pretend to know the answer to that, other than noting that the
2002 DTI Administrative Guidance was in no doubt that the liability provisions
of the Directive had direct effect:<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left: 36pt;"><span style="font-family: georgia;">“If legislators fail to address
such issues or fail to make proper provisions, the Directive will have direct
effect in prohibiting them from imposing liability.”<o:p></o:p></span></p>
<span style="line-height: 107%;"><span style="font-family: georgia;">However, any potential retention of
direct effect would not offer any assistance in civil liability
cases, since direct effect of Directives has been limited to the state, and not extended horizontally to affect rights
as between private parties. For the Directive's intermediary liability protections the CJEU so held in <i><a href="http://curia.europa.eu/juris/document/document.jsf;jsessionid=907F7B8CF9E42CB6452BBA93E712814F?text=&docid=157524&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=61350" target="_blank">Papasavvas</a></i> (C-291/13). </span></span><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;"><span style="color: red;">[Amended 8 and 10 Feb 2021 to add reference to October 2020 government guidance; and 9 March 2021 to add reference to <i>Papasavvas</i>.]</span><br /></span><div><span style="font-family: georgia;"><br /></span><span style="line-height: 107%;"><span style="font-family: georgia;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div></span></span><div><br /></div></div></div>Cyberleaglehttp://www.blogger.com/profile/17507190182464072147noreply@blogger.com0tag:blogger.com,1999:blog-229721367671779922.post-27194525180725636972020-12-28T17:09:00.002+00:002020-12-29T14:50:20.757+00:00Internet legal developments to look out for in 2021<p><span style="font-family: georgia;">Seven years ago I started to take an annual look at what the
coming year might hold for internet law in the UK. This exercise has
always, perforce, included EU law. With Brexit now fully upon us, future
developments in EU law will no longer form part of UK law. Nevertheless, they
remain potentially influential: not least, because the 2018 EU Withdrawal Act
provides that UK courts may have regard to anything relevant done by the CJEU,
another EU entity or the EU after 31 December. In any case I am partial to a
bit of comparative law. So this survey will continue to keep significant EU law developments on its radar.</span></p><p class="MsoNormal"><span style="font-family: georgia;"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">What can we expect in 2021?<br />
<br />
<b>Copyright</b><br />
<b><br />
Digital Single Market</b> EU Member States are due to implement the
Digital Copyright Directive by 7 June 2021. This includes the so-called snippet
tax (the press publishers’ right) and the Article 17 rules for online sharing
service providers (OSSPs). The UK is not obliged to implement the Directive and
has said that it has <a href="https://questions-statements.parliament.uk/written-questions/detail/2020-01-16/4371">no
plans to do so</a>. Any future changes to the UK copyright framework will be “considered
as part of the usual domestic policy process”.<br />
<br />
The Polish government’s challenge to Article 17 (<a href="http://curia.europa.eu/juris/document/document.jsf?text=&docid=216823&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=7919548"><i>Poland
v Parliament and Council</i>, Case C-401/19</a>) is pending. Poland argues that
Article 17 makes it necessary for OSSPs, in order to avoid liability, to carry
out prior automatic filtering of content uploaded online by users, and
therefore to introduce preventive control mechanisms. It contends that such
mechanisms undermine the essence of the right to freedom of expression and
information and do not comply with the requirement that limitations imposed on
that right be proportionate and necessary.<br />
<br />
<b>Linking and communication to the public </b>The UK case of <a href="http://www.bailii.org/ew/cases/EWHC/Ch/2019/2923.html"><i>Warner
Music/Sony Music v TuneIn</i></a> is due to come before the Court of
Appeal early in 2021.<br />
<br />
<b>Pending CJEU copyright cases</b> Several copyright references are
pending before the EU Court of Justice. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The <i>YouTube</i> and <i>Uploaded </i>cases
(<a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B682%3B18%3BRP%3B1%3BP%3B1%3BC2018%2F0682%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-682%252F18&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921092">C-682/18 <i>Peterson
v YouTube</i></a> and <a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B683%3B18%3BRP%3B1%3BP%3B1%3BC2018%2F0683%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-683%252F18&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921161">C-683/18 <i>Elsevier
v Cyando</i></a>) referred from the German Federal Supreme Court include
questions around the communication to the public right, as do <a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B392%3B19%3BRP%3B1%3BP%3B1%3BC2019%2F0392%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-392%252F19&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921253">C-392/19 <i>VG
Bild-Kunst v Preussischer Kulturbesitz</i></a> (Germany, BGH), <a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B442%3B19%3BRP%3B1%3BP%3B1%3BC2019%2F0442%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-442&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921317">C-442/19 <i>Brein
v News Service Europe</i></a> (Netherlands, Supreme Court) and <a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B597%3B19%3BRP%3B1%3BP%3B1%3BC2019%2F0597%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-597%252F19&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921403">C-597/19 <i>Mircom
v Telenet</i></a> (Belgium). Advocate General Opinions have been delivered
in <i>YouTube/Cyando</i>, <i>VG Bild-Kunst</i> and <i>Mircom</i>. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><i>YouTube/Cyando</i> and <i>Brein v News Service
Europe</i> also raise questions about copyright injunctions against
intermediaries, as does <a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B500%3B19%3BRP%3B1%3BP%3B1%3BC2019%2F0500%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-500%252F19&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7921507">C-500/19 <i>Puls
4 TV</i></a>.<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Linking, search metadata and database right<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;"><a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B762%3B19%3BRP%3B1%3BP%3B1%3BC2019%2F0762%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-762%252F19&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=20664563">C-762/19
<i>CV-Online Latvia</i></a> is a CJEU referral from Riga Regional Court
concerning database right. The defendant search engine finds websites that
publish job advertisements and uses hyperlinks to redirect users to the source websites,
including that of the applicant. The defendant’s search results also include
information - hyperlink, job, employer, geographical location of the job, and
date – obtained from metatags on the applicant’s website published as
Schema.org microdata. The questions for the CJEU are whether (a) the use of a hyperlink
constitutes re-utilisation and (b) the use of the metatag data constitutes
extraction, for the purposes of database right infringement. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Online intermediary liability<br />
</b><br />
The UK government published its <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response">Full
Consultation Response</a> to the <a href="https://www.gov.uk/government/consultations/online-harms-white-paper">Online
Harms White Paper</a> on 15 December 2020, paving the way for a draft Online
Safety Bill in 2021. The government has indicated that the draft Bill will be
subject to pre-legislative scrutiny.<br />
<br />
The German Federal Supreme Court has referred two cases (<i>YouTube</i> and <i>Cyando
</i>– see above) to the CJEU asking questions about (among other things) the
applicability of the ECommerce Directive hosting protections to UGC sharing
sites. The Advocate General’s Opinion in these cases has been published.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><i>Brein v News Service Europe</i> and <i>Puls 4 TV</i> (see
above for both) also ask questions around the Article 14 hosting protection,
including whether it is precluded if communication to the public is found.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The European Commission published its <a href="https://ec.europa.eu/digital-single-market/en/digital-services-act-package">proposals</a>
for a Digital Services Act and a Digital Markets Act on 15 December 2020. The
proposed Digital Services Act includes replacements for Articles 12 to 15 of
the ECommerce Directive.<span style="mso-spacerun: yes;"> </span>The proposals
will now proceed through the EU legislative process.<br />
<br />
The European Commission’s <a href="http://europa.eu/rapid/press-release_IP-18-5561_en.htm">Proposal for a
Regulation</a> on preventing the dissemination of terrorist content online
is nearing the final stages of its legislative process, the Council and
Parliament having reached <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2372">political
agreement</a> on 10 December 2020. The proposed Regulation is notable for requiring
one hour takedown response times and also for proactive monitoring obligations
- potentially derogating from the ECommerce Directive <a href="https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html">Article
15 prohibition</a> on imposing general monitoring obligations on conduits,
caches and hosts. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The prospect of a post-Brexit UK-US trade agreement has
prompted speculation that such an agreement might require the UK to adopt a
provision equivalent to section 230 of the US Communications Decency Act. However, if
the US-Mexico-Canada Agreement precedent were adopted in such an agreement, that
would appear not to follow (as explained <a href="https://www.techdirt.com/articles/20200824/13595845171/intermediary-liability-responsibilities-post-brexit-graham-smith.shtml">here</a>).<b><br />
</b><br />
<b>Cross-border </b><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US" style="mso-ansi-language: EN-US;">The US and
the UK signed a </span><a href="https://www.gov.uk/government/news/uk-and-us-sign-landmark-data-access-agreement"><span lang="EN-US" style="mso-ansi-language: EN-US;">Data Access Agreement</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> on 3 October 2019, providing
domestic law comfort zones for service providers to respond to data access
demands from authorities located in the other country. No announcement has yet been
made that the Agreement has entered into operation. The Agreement has potential relevance in
the context of a post-Brexit UK data protection </span><a href="https://www.twobirds.com/en/news/articles/2017/uk/visions-of-adequacy-uk-surveillance-powers-after-brexit"><span lang="EN-US" style="mso-ansi-language: EN-US;">adequacy decision</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> by the European Commission.</span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Discussions
continue on a </span><a href="https://www.coe.int/en/web/cybercrime/-/cybercrime-towards-a-protocol-on-evidence-in-the-cloud"><span lang="EN-US" style="mso-ansi-language: EN-US;">Second Protocol</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> to the Cybercrime Convention,
on evidence in the cloud.</span><br />
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br />
<b>State surveillance of communications</b></span><br />
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br />
The kaleidoscopic mosaic of cases capable of affecting the UK’s </span><a href="http://www.legislation.gov.uk/ukpga/2016/25/contents/enacted"><span lang="EN-US" style="mso-ansi-language: EN-US;">Investigatory Powers Act 2016</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> (IP Act) continues to reshape itself.
In this field CJEU judgments remain particularly relevant, since they form the
backdrop to any </span><a href="https://www.twobirds.com/en/news/articles/2017/uk/visions-of-adequacy-uk-surveillance-powers-after-brexit"><span lang="EN-US" style="mso-ansi-language: EN-US;">data protection adequacy</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> decision that the European
Commission might adopt in respect of the UK post-Brexit. The recently agreed UK-EU
Trade and Co-operation Agreement provides a period of up to 6 months for the
Commission to propose and adopt an adequacy decision.<o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Relevant CJEU
judgments now include, most recently, <i>Privacy International</i> (</span><a href="http://curia.europa.eu/juris/fiche.jsf?id=C%3B623%3B17%3BRP%3B1%3BP%3B1%3BC2017%2F0623%2FP&oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-623%252F17&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=7922841"><span lang="EN-US" style="mso-ansi-language: EN-US;">Case C-623/17</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;">), La Quadrature du Net (</span><a href="http://curia.europa.eu/juris/liste.jsf?language=en&num=c-511/18&td=ALL"><span lang="EN-US" style="mso-ansi-language: EN-US;">C-511/18 and C-512/18</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;">), and <i>Ordre des barreaux
francophones et germanophone</i> (</span><a href="http://curia.europa.eu/juris/document/document.jsf?text=&docid=207616&pageIndex=0&doclang=en&mode=req&dir=&occ=first&part=1&cid=5046170"><span lang="EN-US" style="mso-ansi-language: EN-US;">C-520/18</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;">) (see discussion </span><a href="https://digitalbusiness.law/2020/11/eu-law-meets-state-communications-surveillance-what-consequences-for-uk-data-protection-adequacy/"><span lang="EN-US" style="mso-ansi-language: EN-US;">here</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;"> and </span><a href="https://www.cyberleagle.com/2020/10/hard-questions-about-soft-limits.html"><span lang="EN-US" style="mso-ansi-language: EN-US;">here</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;">).<o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Domestically,
Liberty has a pending judicial review of the IP Act bulk powers and data
retention powers. Some EU law aspects (including bulk powers) were stayed
pending the <i>Privacy International</i> reference to the CJEU. The Divisional Court <a href="https://www.bailii.org/ew/cases/EWHC/Admin/2018/975.html" target="_blank">rejected</a> the claim
that the IP Act data retention powers provide for the general and indiscriminate
retention of traffic and location data, contrary to EU law. That point
may in due course come before the Court of Appeal. </span><br />
<br />
<span lang="EN-US" style="mso-ansi-language: EN-US;">In the European Court of Human
Rights, Big Brother Watch and various other NGOs challenged the pre-IP Act bulk
interception regime under the Regulation of Investigatory Powers Act (RIPA). The
ECtHR gave a Chamber judgment on 13 September 2018. That and the Swedish <i><a href="https://hudoc.echr.coe.int/eng-press#{%22itemid%22:[%22003-6120023-7901747%22]}">Rättvisa</a></i> case
were subsequently referred to the ECtHR Grand Chamber and await judgment. If
the <i>BBW</i> Chamber judgment had become final it could have affected
the IP Act in as many as </span><a href="https://www.cyberleagle.com/2018/10/what-will-be-in-investigatory-powers.html"><span lang="EN-US" style="mso-ansi-language: EN-US;">three separate ways</span></a><span lang="EN-US" style="mso-ansi-language: EN-US;">.<o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><span lang="EN-US" style="mso-ansi-language: EN-US;">In response
to one of the <i>BBW</i> findings the government has said that it will
introduce ‘thematic’ certification by the Secretary of State of requests to
examine bulk secondary data of individuals believed to be within the British
Islands.</span><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;"><b>Software - goods or services?</b><br />
<br />
Judgment is pending in the CJEU on a referral from the UK Supreme Court asking
whether software supplied electronically as a download and not on any tangible
medium constitutes goods and/or a sale for the purposes of the Commercial
Agents Regulations (<a href="http://curia.europa.eu/juris/documents.jsf?oqp=&for=&mat=or&lgrec=en&jge=&td=%3BALL&jur=C%2CT%2CF&num=C-410%252F19&page=1&dates=&pcs=Oor&lg=&pro=&nat=or&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=19998219">C-410/19</a>
<i>Computer Associates (UK) Ltd v The Software Incubator Ltd</i>). The <a href="http://curia.europa.eu/juris/document/document.jsf?text=&docid=235731&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=19998219">Advocate
General’s Opinion</a> was delivered on 17 December 2020. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Law Commission projects<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission has in train several projects that have
the potential to affect online activity. <o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It is expected to make recommendations on reform of the
criminal law relating to <b>Harmful Online Communications</b> in early 2021.
The government has said that it will consider, where appropriate, implementing
the Law Commission’s final recommendations through the forthcoming Online
Safety Bill. The Law Commission issued a <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2020/09/Online-Communications-Consultation-Paper-FINAL-with-cover.pdf">consultation
paper</a> in September 2020 (consultation closed 18 December 2020).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">The Law Commission has also issued a <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2020/10/Hate-crime-final-report.pdf">Consultation
Paper</a> on <b>Hate Crime Laws</b>, which, while not specifically focused on
online behaviour, inevitably includes it (consultation closed 24 December 2020).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">It has recently launched a <a href="https://www.lawcom.gov.uk/project/smart-contracts/">Call for Evidence</a>
on <b>Smart Contracts</b> (closing 31 March 2021) and is also in the early
stages of a <a href="https://www.lawcom.gov.uk/project/digital-assets/">project</a>
on <b>Digital Assets</b>. <o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-family: georgia;">Electronic transactions<o:p></o:p></span></b></p>
<p class="MsoNormal"><span style="font-family: georgia;">The pandemic has focused attention on legal obstacles to
transacting electronically and remotely. Whilst such obstacles are uncommon in
commercial transactions, some impediments do exist and, in a few cases, have been
temporarily relaxed. That may pave the way for permanent changes in due course.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-family: georgia;">Although the question typically asked is whether electronic
signatures can be used, the most significant obstacles tend to be presented by surrounding
formalities rather than signature requirements themselves. A case in point is
the physical presence requirement for witnessing deeds, which stands in the way
of remote witnessing by video or screen-sharing. The Law Commission Report on
Electronic Execution of Documents recommended that the government should set up
an Industry Working Group to look at that and other issues. </span><o:p></o:p></p><p class="MsoNormal"><span style="font-family: georgia;"><b>Data Protection </b></span></p><p class="MsoNormal"><span style="font-family: georgia;"></span></p><p class="MsoNormal"><span style="font-family: georgia;">Traditionally this survey does not cover data protection (too
big, and a dense specialism in its own right). On this occasion, however, the <i>Lloyd
v Google</i> appeal pending in the UK Supreme Court should not pass without notice.</span><o:p></o:p></p><p class="MsoNormal"><b style="font-family: georgia;">ePrivacy</b></p><p class="MsoNormal"><span style="font-family: georgia;">EU Member States had to implement the Directive establishing the <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32018L1972&from=EN">European
Electronic Communications Code</a> (EECD) by 21 December 2020. The Code brings ‘over
the top’ messaging applications into the scope of ‘electronic communications
services’ for the purpose of the EU telecommunications regulatory framework. As a result, the
communications confidentiality provisions of the <a href="https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32002L0058">ePrivacy
Directive</a> also came into scope, affecting practices such as scanning to
detect child abuse images. In order to enable such practices to continue, the
European Commission proposed <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELLAR:f9ee4b32-f353-11ea-991b-01aa75ed71a1">temporary
legislation</a> derogating from the ePrivacy Directive prohibitions. The
proposed Regulation missed the 21 December deadline and <a href="https://data.consilium.europa.eu/doc/document/ST-12084-2020-INIT/en/pdf">continues
through</a> the EU legislative process. <o:p></o:p></span></p><p class="MsoNormal">
</p><p class="MsoNormal"><span style="font-family: georgia;">Meanwhile there is as yet no conclusion to the long-drawn-out attempt to reach consensus on a <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52017PC0010">proposed
replacement</a> for the ePrivacy Directive itself.</span> <o:p></o:p></p><p class="MsoNormal"><span style="font-family: georgia;">[Updated 29 December 2020 to add sections on Data Protection and ePrivacy.] </span></p><p class="MsoNormal"><br /></p><p class="MsoNormal"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s135/snip2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhgh8eo6maeGTSI5fayB3eUuHc1mydlHZDJtZV3vpclRxeGlclFzfFgfCJE0fjVIJ2IzW9cW7rb7LrZtfBu87FGWstRbiksP8FJMsGA6lpKr9CFDWBBxq71caXhs3bLfIay-jpQPO9wHtv/s0/snip2.png" /></a></div><br /><span style="font-family: georgia;"><br /></span><p></p>