
Tuesday, 11 February 2025

The Online Safety Act grumbles on

Policymakers sometimes comfort themselves that if no-one is completely satisfied, they have probably got it about right. 

On that basis, Ofcom’s implementation of the Online Safety Act’s illegality duties must be near-perfection: the Secretary of State (DSIT) administering a sharp nudge with his draft Statement of Strategic Priorities, while simultaneously under fire for accepting Ofcom’s advice on categorisation of services; volunteer-led community forums threatening to close down in the face of perceived compliance burdens; and many of the Act’s cheerleaders complaining that Ofcom’s implementation has so far served up less substantial fare than they envisaged. 

As of now, an estimated 25,000 UK user-to-user and search providers (plus another 75,000 around the world) are meant to be busily engaged in getting their Illegal Harms risk assessments finished by 16 March. 

Today is Safer Internet Day. So perhaps spare a thought for those who are getting to grips with core and enhanced inputs, puzzling over what amounts to a ‘significant’ number of users, learning that a few risk factors may constitute ‘many’ (footnote 74 to Ofcom’s General Risk Level Table), or wondering whether their service can be ‘low risk’ if they allow users to post hyperlinks.  (Ofcom has determined that hyperlinks are a risk factor for six of the 17 kinds of priority offence designated by the Act: terrorism, CSEA, fraud and financial services, drugs and psychoactive substances, encouraging or assisting suicide and foreign interference offences). 
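
As a purely illustrative aside, the hyperlink point can be reduced to a few lines of Python. The mapping below reflects the six priority offence kinds just listed; the checklist structure and the function name are my own hypothetical shorthand, not anything drawn from Ofcom’s Risk Assessment Guidance.

    # Illustrative sketch only: not Ofcom's methodology or terminology.
    # The mapping of hyperlinks to six priority offence kinds follows the
    # list above; the function name and structure are hypothetical.

    HYPERLINK_RISK_OFFENCE_KINDS = [
        "terrorism",
        "CSEA",
        "fraud and financial services",
        "drugs and psychoactive substances",
        "encouraging or assisting suicide",
        "foreign interference",
    ]

    def offence_kinds_flagged_by_hyperlinks(allows_hyperlinks: bool) -> list[str]:
        """Priority offence kinds for which hyperlinks are treated as a risk factor."""
        return HYPERLINK_RISK_OFFENCE_KINDS if allows_hyperlinks else []

    # A service that lets users post hyperlinks picks up six offence kinds
    # to consider in its risk assessment.
    print(offence_kinds_flagged_by_hyperlinks(True))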

Grumbles from whichever quarter will come as no great surprise to those (this author included) who have argued from the start that the legislation is an ill-conceived, unworkable mess which was always destined to end in tears. Even so, and making due allowance for the well-nigh impossible task with which Ofcom has been landed, there is an abiding impression that Ofcom’s efforts to flesh out the service provider duties – risk assessment in particular – could have produced something easier to understand.

The original illegal harms consultation drew flak for its sheer bulk: a tad over 1,700 pages. The final round of illegal harms documents is even weightier: over 2,400 pages in all. It is in two parts. The first is a Statement. In accordance with Ofcom’s standing consultation principles, it aims to explain what Ofcom is going to do and why, showing how respondents’ views helped to shape Ofcom’s decisions. That amounts to 1,175 pages, including two summaries. 

The remaining 1,248 pages consist of statutory documents: those that the Act itself requires Ofcom to produce. These are a Register of Risks, Risk Assessment Guidance, Risk Profiles, Record Keeping and Review Guidance, a User to User Illegal Content Code of Practice, a Search Service Illegal Content Code of Practice, Illegal Content Judgements Guidance, Enforcement Guidance, and Guidance on Content Communicated Publicly and Privately. Drafts of the two Codes of Practice were laid before Parliament on 16 December 2024. Ofcom can issue them in final form upon completion of that procedure.

When it comes to ease of understanding, it is tempting to go on at length about the terminological tangles to be found in the documents, particularly around ‘harm’, ‘illegal harm’ and ‘kinds of illegal harm’. But really, what more is worth saying? Ofcom’s documents are, to all intents and purposes, set in stone. Does it help anyone to pen another few thousand words bemoaning opaque language? Other than giving those struggling to understand the documents the comfort that they are not alone, probably not. Everyone has to get on and make the best of it.

So one illustration will have to suffice. ‘Illegal harm’ is not a term defined or used in the Act. In the original consultation documents Ofcom’s use of ‘illegal harm’ veered back and forth between the underlying offence, the harm caused by an offence, and a general catch-all for the illegality duties, often leaving the reader to guess in which sense it was being used.

The final documents are improved in some places, but introduce new conundrums in others. One of the most striking examples is paragraph 2.35 and Table 6 of the Risk Assessment Guidance (emphasis added to all quotations below). 

Paragraph 2.35 says: 

“When evaluating the likelihood of a kind of illegal content occurring on your service and the chance of your service being used to commit or facilitate an offence, you should ask yourself the questions set out in Table 6.”

Table 6 is headed: 

“What to consider when assessing the likelihood of illegal content”

The table then switches from ‘illegal content’ to ‘illegal harm’. The first suggested question in the table is whether risk factors indicate that: 

“this kind of illegal harm is likely to occur on your service?” 

‘Illegal harm’ is footnoted with a reference to a definition in the Introduction: 

“the physical or psychological harm which can occur from a user encountering any kind of illegal content…”. 

So what is the reader supposed to be evaluating: the likelihood of occurrence of illegal content, or the likelihood of physical or psychological harm arising from such content? 

If ‘Illegal Harm’ had been nothing more than a title that Ofcom gave to its illegality workstream, then what the term actually meant might not have mattered very much. But the various duties that the Act places on service providers, and even Ofcom’s own duties, rest on carefully crafted distinctions between illegal content, underlying criminal offences and harm (meaning physical or psychological harm) arising from such illegality. 

That can be seen in this visualisation. It illustrates the U2U service provider illegality duties - both risk assessment and substantive - together with the Ofcom duty to prepare an illegality Risks Register and Risk Profiles.  The visualisation divides the duties into four zones (A, B, C and D), explained below. 

A: The duties in this zone require U2U providers to assess certain risks related to illegal content (priority and non-priority). These risks are independent of and unrelated to harm. The risks to be assessed have no direct counterpart in any of the substantive safety duties in Section 10. Their relevance to those safety duties probably lies in the proportionality assessment of measures to fulfil the Section 10 duties. 

Although the service provider’s risk assessment has to take account of the Ofcom Risk Profile that relates to its particular kind of service, Ofcom’s Risk Profiles are narrower in scope than the service provider risk assessment. Under the Act Ofcom’s Risks Register and Risk Profiles are limited to the risk of harm (meaning physical or psychological harm) to individuals in the UK presented by illegal content present on U2U services and by the use of such services for the commission or facilitation of priority offences. 

B:  This zone contains harm-related duties (identified in yellow): Ofcom Risk Profiles, several service provider risk assessment duties framed by reference to harm, plus the one substantive Section 10 duty framed by reference to harm (fed by the results of the harm-related risk assessment duties). Harm has its standard meaning in the Act: physical or psychological harm. 

C: This zone contains two service provider risk assessment duties which are independent of and unrelated to risk of harm, but which feed directly into a corresponding substantive Section 10 duty. 

D: This zone contains the substantive Section 10 duties: one based on harm and three which stand alone. Those three are not directly coupled to the service provider’s risk assessment.
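
One way of keeping the four zones straight is simply to treat the description above as data. The sketch below, in Python, records each zone in this post’s own shorthand; the labels, descriptions and structure are mine, not the Act’s or Ofcom’s.

    # A rough data model of the four zones described above.
    # Zone labels and descriptions are this post's shorthand, not statutory language.

    ZONES = {
        "A": "Provider risk assessment duties for illegal content (priority and "
             "non-priority), independent of harm; no direct counterpart in the "
             "substantive s.10 duties, but relevant to the proportionality of "
             "measures taken to fulfil them.",
        "B": "Harm-related duties: Ofcom's Risk Profiles, several provider risk "
             "assessment duties framed by reference to harm, and the one "
             "harm-based substantive s.10 duty that they feed.",
        "C": "Two provider risk assessment duties unrelated to risk of harm, "
             "each feeding a corresponding substantive s.10 duty directly.",
        "D": "The substantive s.10 duties: one based on harm and three standing "
             "alone, not directly coupled to the provider's risk assessment.",
    }

    for zone, summary in ZONES.items():
        print(f"Zone {zone}: {summary}")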

This web of duties is undeniably complex. One can sympathise with the challenge of rendering it into a practical and readily understandable risk assessment process capable of feeding the substantive duties.  Nevertheless, a plainer and more consistently applied approach to terminology in Ofcom's documents would have paid dividends.



Saturday, 17 June 2023

Shifting paradigms in platform regulation

[Based on a keynote address to the conference on Contemporary Social and Legal Issues in a Social Media Age held at Keele University on 14 June 2023.]

First, an apology for the title. Not for the rather sententious ‘shifting paradigms’ – this is, after all, an academic conference – but ‘platform regulation’. If ever there was a cliché that cloaks assumptions and fosters ambiguity, ‘platform regulation’ is it.

Why is that? For three reasons.

First, it conceals the target of regulation. In the context with which we are concerned, users – not platforms – are the primary target. In the Online Safety Bill model, platforms are not the end. They are merely the means by which the state seeks to control – regulate, if you like – the speech of end-users.

Second, because of the ambiguity inherent in the word regulation. In its broad sense it embraces everything from the general law of the land that governs – regulates, if you like – our speech, to discretionary, broadcast-style regulation by regulator: the Ofcom model. If we think – and I suspect many don’t – that the difference matters, then to have them all swept up together under the banner of regulation is unhelpful.

Third, because it opens the door to the kind of sloganising with which we have become all too familiar over the course of the Online Harms debate: the unregulated Internet; the Wild West Web; ungoverned online spaces.

What do they mean by this?
  • Do they mean that there is no law online? Internet Law and Regulation has 750,000 words that suggest otherwise.
  • Do they mean that there is law but it is not enforced? Perhaps they should talk to the police, or look at new ways of providing access to justice.
  • Do they mean that there is no Ofcom online? That is true – for the moment – but the idea that individual speech should be subject to broadcast-style regulation rather than the general law is hardly a given. Broadcast regulation of speech is the exception, not the norm.
  • Do they mean that speech laws should be stricter online than offline? That is a proposition to which no doubt some will subscribe, but how does that square with the notion of equivalence implicit in the other studiously repeated mantra: that what is illegal offline should be illegal online?
The sloganising perhaps reached its nadir when the Joint Parliamentary Committee scrutinising the draft Online Safety Bill decided to publish its Report under the strapline: ‘No Longer the Land of the Lawless’ – 100% headline-grabbing clickbait – adding, for good measure: “A landmark report which will make the tech giants abide by UK law”.

Even if the Bill were about tech giants and their algorithms – and according to the government’s Impact Assessment 80% of in-scope UK service providers will be micro-businesses – at its core the Bill seeks not to make tech giants abide by UK law, but to press platforms into the role of detective, judge and bailiff: to require them to pass judgment on whether we – the users - are abiding by UK law. That is quite different.

What are the shifting paradigms to which I have alluded?

First, the shift from Liability to Responsibility

Go back twenty-five years and the debate was all about liability of online intermediaries for the unlawful acts of their users. If a user’s post broke the law, should the intermediary also be liable and if so in what circumstances? The analogies were with phone companies and bookshops or magazine distributors, with primary and secondary publishers in defamation, with primary and secondary infringement in copyright, and similar distinctions drawn in other areas of the law.

In Europe the main outcome of this debate was the E-Commerce Directive, passed at the turn of the century and implemented in the UK in 2002. It laid down the well-known categories of conduit, caching and hosting. Most relevantly to platforms, for hosting it provided a liability shield based on lack of knowledge of illegality. Only if you gained knowledge that an item of content was unlawful, and then failed to remove that content expeditiously, could you be exposed to liability for it. This was closely based on the bookshop and distributor model.

The hosting liability regime was – and is – similar to the notice and takedown model of the US Digital Millennium Copyright Act – and significantly different from S.230 of the US Communications Decency Act 1996, which was more closely akin to full conduit immunity.

The E-Commerce Directive’s knowledge-based hosting shield incentivises – but does not require – a platform to remove user content on gaining knowledge of illegality. Failure to remove then exposes the platform to the risk of liability under the relevant underlying law. That is all: liability does not automatically follow.

Of course the premise underlying all of these regimes is that the user has broken some underlying substantive law. If the user hasn’t broken the law, there is nothing that the platform could be liable for.
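
For readers who prefer conditionals to prose, the scheme just described can be paraphrased in a few lines of Python. The names below are my shorthand, not statutory language: exposure arises only where the user has broken some underlying law, the host has gained knowledge of the illegality and the content is not then removed expeditiously – and even then liability does not automatically follow.

    # A paraphrase of the E-Commerce Directive hosting shield described above.
    # Function and parameter names are hypothetical shorthand, not statutory wording.

    def host_exposed_to_liability(user_broke_underlying_law: bool,
                                  host_knows_of_illegality: bool,
                                  removed_expeditiously: bool) -> bool:
        """True if the hosting shield no longer protects the host.

        Exposure is not the same as liability: whether liability actually
        follows still depends on the relevant underlying law.
        """
        if not user_broke_underlying_law:
            return False  # nothing for the platform to be liable for
        if not host_knows_of_illegality:
            return False  # shield applies: no knowledge of illegality
        if removed_expeditiously:
            return False  # shield applies: content removed expeditiously on knowledge
        return True       # shield lost; liability depends on the underlying law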

It is pertinent to ask – for whose benefit were these liability shields put in place? There is a tendency to frame them as a temporary inducement to grow the then nascent internet industry. Even if there was an element of that, the deeper reason was to protect the legitimate speech of users. The greater the liability burden on platforms, the greater their incentive to err on the side of removing content, the greater the risk to legitimate speech and the greater the intrusion on the fundamental speech rights of users. The distributor liability model adopted in Europe, and the S.230 conduit model in the USA, were for the protection of users as much as, if not more than, for the benefit of platforms.

The Shift to Responsibility has taken two forms.

First, the increasing volume of the ‘publishers not platforms’ narrative. The view is that platforms are curating and recommending user content and so should not have the benefit of the liability shields. As often and as loudly as this is repeated, it has gained little legislative traction. Under the Online Safety Bill the liability shields remain untouched. In the EU Digital Services Act the shields are refined and tweaked, but the fundamentals remain the same. If, incidentally, we think back to the bookshop analogy, it was never the case that a bookshop would lose its liability shield if it promoted selected books in its window, or decided to stock only left-wing literature.

Second, and more significantly, has come a shift towards imposing positive obligations on platforms. Rather than just being exposed to risk of liability for failing to take down users’ illegal content, a platform would be required to do so on pain of a fine or a regulatory sanction. Most significant is when the obligation takes the form of a proactive obligation: rather than awaiting notification of illegal user content, the platform must take positive steps proactively to seek out, detect and remove illegal content.

This has gained traction in the UK Online Safety Bill, but not in the EU Digital Services Act. There is in fact a 180-degree divergence between the UK and the EU on this topic. The DSA repeats and re-enacts the principle first set out in Article 15 of the E-Commerce Directive: the EU prohibition on Member States imposing general monitoring obligations on conduits, caches and hosts. Although the DSA imposes some positive diligence obligations on very large operators, those still cannot amount to a general monitoring obligation.

The UK, on the other hand, has abandoned its original post-Brexit commitment to abide by Article 15, and – under the banner of a duty of care – has gone all out to impose proactive, preventative detection and removal duties on platforms – applying to public forums, and also including powers for Ofcom to require private messaging services to scan for CSEA content.

Proactive obligations of this kind raise serious questions about a state’s compliance with human rights law, due to the high risk that in their efforts to determine whether user content is legal or illegal, platforms will end up taking down users’ legitimate speech at scale. Such legal duties on platforms are subject to especially strict scrutiny, since they amount to a version of prior restraint: removal before full adjudication on the merits, or – in the case of upload filtering – before publication.

The most commonly cited reason for these concerns is that platforms will err on the side of caution when faced with the possibility of swingeing regulatory sanctions. However, there is more to it than that: the Online Safety Bill requires platforms to make illegality judgements on the basis of all information reasonably available to them. But an automated system operating in real time will have precious little information available to it – hardly more than the content of the posts. Arbitrary decisions are inevitable.

Add that the Bill requires the platform to treat user content as illegal if it has no more than “reasonable grounds to infer” illegality, and we have baked-in over-removal at scale: a classic basis for incompatibility with fundamental freedom of speech rights; and the reason why in 2020 the French Constitutional Council held the Loi Avia unconstitutional.
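
The mechanics can be shown with a toy calculation of my own devising – it models no real system, but illustrates why a noisy, context-starved automated judgement, applied at a low ‘reasonable grounds to infer’ threshold, removes lawful posts at scale.

    # Toy illustration only: a noisy 'illegality score' standing in for an
    # automated judgement made with very little context, applied at a low
    # threshold versus a high one.

    import random

    random.seed(0)

    def noisy_illegality_score(actually_illegal: bool) -> float:
        # Scores overlap heavily because little information is available.
        base = 0.7 if actually_illegal else 0.3
        return min(1.0, max(0.0, random.gauss(base, 0.2)))

    posts = [False] * 9_900 + [True] * 100  # overwhelmingly lawful content

    for threshold, label in [(0.5, "reasonable grounds to infer"),
                             (0.9, "near-certainty")]:
        lawful_removed = sum(1 for illegal in posts
                             if not illegal
                             and noisy_illegality_score(illegal) >= threshold)
        print(f"{label}: {lawful_removed} lawful posts removed out of 9,900")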

The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression.

From Illegality to Harm

For so long as the platform regulation debate centred around liability, it also had to be about illegality: if the user’s post was not illegal, there was nothing to bite on – nothing for which the intermediary could be held liable.

But once the notion of responsibility took hold, that constraint fell away. If a platform could be placed under a preventative duty of care, that could be expanded beyond illegality. That is what happened in the UK. The Carnegie UK Trust argued that platforms ought to be treated analogously to occupiers of physical spaces and owe their visitors a duty of care, extended to encompass types of harm beyond physical injury.

The fundamental problem with this approach is that speech is not a tripping hazard. Speech is not a projecting nail, or an unguarded circular saw, that will foreseeably cause injury – with no possibility of benefit – if someone trips over it. Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people. A duty of care is workable for risk of objectively ascertainable physical injury but not for subjectively perceived and contested harms, let alone more nebulously conceived harms to society. The Carnegie approach also glossed over the distinction between a duty to avoid causing injury and a duty to prevent others from injuring each other (imposed only exceptionally in the offline world).

In order to discharge such a duty of care the platform would have to balance the interests of the person who claims to be traumatised by reading something to which they deeply object, against the interests of the speaker, and against the interests of other readers who may have a completely different view of the merits of the content.

That is not a duty that platforms are equipped, or could ever have the legitimacy, to undertake; and if the balancing task is entrusted to a regulator such as Ofcom, that is tantamount to asking Ofcom to write a parallel statute book for online speech – something which many would say should be for Parliament alone.

The misconceived duty of care analogy has bedevilled the Online Harms debate and the Bill from the outset. It is why the government got into such a mess with ‘legal but harmful for adults’ – now dropped from the Bill.

The problems with subjectively perceived harm are also why the government ended up abandoning its proposed replacement for S.127(1) of the Communications Act 2003: the harmful communications offence.

From general law to discretionary regulation

I started by highlighting the difference between individual speech governed by the general law and regulation by regulator. We can go back to the 1990s and find proposals to apply broadcast-style discretionary content regulation to the internet. The pushback was equally strong. Broadcast-style regulation was the exception, not the norm. It was born of spectrum scarcity and had no place in governing individual speech.

ACLU v Reno (the US Communications Decency Act case) applied a medium-specific analysis to the internet and placed individual speech – analogised to old-style pamphleteers – at the top of the hierarchy, deserving of greater protection from government intervention than cable or broadcast TV.

In the UK the key battle was fought during the passing of the Communications Act 2003, when the internet was deliberately excluded from the content remit of Ofcom. That decision may have been based more on practicality than principle, but it set the ground rules for the next 20 years.

It is instructive to hear peers with broadcast backgrounds saying what a mistake it was to exclude the internet from Ofcom’s content remit in 2003 – as if broadcast is the offline norm and as if Ofcom makes the rules about what we say to each other in the street.

I would suggest that the mistake is being made now – both in introducing regulation by regulator and in consigning individual speech to the bottom of the heap.

From right to risk

The notion has gained ground that individual speech is a fundamental risk, not a fundamental right: that we are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back into its bottle.

Other shifts

We can detect other shifts. The blossoming narrative that if someone does something outrageous online, the fault is more with the platform than with the perpetrator. The notion that platforms have a greater responsibility than parents for the online activities of children. The relatively recent shift towards treating large platforms as akin to public utilities on which obligations not to remove some kinds of user content can legitimately be imposed. We see this chiefly in the Online Safety Bill’s obligations on Category 1 platforms in respect of content of democratic importance, news publisher and journalistic content.

From Global to Local

I want to finish with something a little different: the shift from Global to Local. Nowadays we tend to have a good laugh at the naivety of the 1990s cyberlibertarians who thought that the bits and bytes would fly across borders and there was not a thing that any nation state could do about it.

Well, the nation states had other ideas, starting with China and its Great Firewall. How successfully a nation state can insulate its citizens from cross-border content is still doubtful, but perhaps more concerning is the mindset behind an increasing tendency to seek to expand the territorial reach of local laws online – in some cases, effectively seeking to legislate for the world.

In theory a state may be able to do that. But should it? The ideal is peaceful coexistence of conflicting national laws, not ever more fervent efforts to demonstrate the moral superiority and cross-border reach of a state’s own local law. Over the years a de facto compromise had been emerging, with the steady expansion of the idea that you engage the laws and jurisdiction of another state only if you take positive steps to target it. Recently, however, some states have become more expansive – not least in their online safety legislation.

The UK Online Safety Bill is a case in point, stipulating that a platform is in scope if it is capable of being used in the United Kingdom by individuals and there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom presented by user content on the site.

That is close to a ‘mere accessibility’ test – but not as close as the Australian Online Safety Act, which brings into scope any social media site accessible from Australia.
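
As a shorthand comparison (my own simplification of both statutes, covering only the limbs mentioned above), the two scope tests look like this:

    # Simplified paraphrases of the scope tests discussed above; not the
    # statutory wording, and covering only the limbs mentioned in this post.

    def uk_osb_in_scope(capable_of_use_in_uk_by_individuals: bool,
                        material_risk_of_significant_harm_to_uk_individuals: bool) -> bool:
        # UK Online Safety Bill: close to a mere-accessibility test, but with
        # a material-risk-of-harm qualifier.
        return (capable_of_use_in_uk_by_individuals
                and material_risk_of_significant_harm_to_uk_individuals)

    def australian_osa_in_scope(accessible_from_australia: bool) -> bool:
        # Australian Online Safety Act: mere accessibility from Australia suffices.
        return accessible_from_australia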

There has long been a consensus against ‘mere accessibility’ as a test for jurisdiction. It leads either to geo-fencing of websites or to global application of the most restrictive common content denominator. That consensus seems to be in retreat.

Moreover, the more exorbitant the assertion of jurisdiction, the greater the headache of enforcement. Which in turn leads to what we see in the UK Online Safety Bill, namely provisions for disrupting the activities of the non-compliant foreign platform: injunctions against support services such as banking or advertising, and site blocking orders against ISPs.

The concern has to be that in their efforts to assert themselves and their local laws online, nation states are not merely re-erecting national borders with a degree of porosity, but erecting Berlin Walls in cyberspace.