Saturday 17 June 2023

Shifting paradigms in platform regulation

[Based on a keynote address to the conference on Contemporary Social and Legal Issues in a Social Media Age held at Keele University on 14 June 2023.]

First, an apology for the title. Not for the rather sententious ‘shifting paradigms’ – this is, after all, an academic conference – but ‘platform regulation’. If ever there was a cliché that cloaks assumptions and fosters ambiguity, ‘platform regulation’ is it.

Why is that? For three reasons.

First, it conceals the target of regulation. In the context with which we are concerned, users – not platforms – are the primary target. In the Online Safety Bill model, platforms are not the end. They are merely the means by which the state seeks to control – regulate, if you like – the speech of end-users.

Second, because of the ambiguity inherent in the word ‘regulation’. In its broad sense it embraces everything from the general law of the land that governs – regulates, if you like – our speech to discretionary, broadcast-style regulation by regulator: the Ofcom model. If we think – and I suspect many don’t – that the difference matters, then to have them all swept up together under the banner of regulation is unhelpful.

Third, because it opens the door to the kind of sloganising with which we have become all too familiar over the course of the Online Harms debate: the unregulated Internet; the Wild West Web; ungoverned online spaces.

What do they mean by this?
  • Do they mean that there is no law online? Internet Law and Regulation has 750,000 words that suggest otherwise.
  • Do they mean that there is law but it is not enforced? Perhaps they should talk to the police, or look at new ways of providing access to justice.
  • Do they mean that there is no Ofcom online? That is true – for the moment – but the idea that individual speech should be subject to broadcast-style regulation rather than the general law is hardly a given. Broadcast regulation of speech is the exception, not the norm.
  • Do they mean that speech laws should be stricter online than offline? That is a proposition to which no doubt some will subscribe, but how does that square with the notion of equivalence implicit in the other studiously repeated mantra: that what is illegal offline should be illegal online?
The sloganising perhaps reached its nadir when the Joint Parliamentary Committee scrutinising the draft Online Safety Bill decided to publish its Report under the strapline: ‘No Longer the Land of the Lawless’ – 100% headline-grabbing clickbait – adding, for good measure: “A landmark report which will make the tech giants abide by UK law”.

Even if the Bill were about tech giants and their algorithms – and according to the government’s Impact Assessment 80% of in-scope UK service providers will be micro-businesses – at its core the Bill seeks not to make tech giants abide by UK law, but to press platforms into the role of detective, judge and bailiff: to require them to pass judgment on whether we – the users – are abiding by UK law. That is quite different.

What are the shifting paradigms to which I have alluded?

First the shift from Liability to Responsibility

Go back twenty-five years and the debate was all about liability of online intermediaries for the unlawful acts of their users. If a user’s post broke the law, should the intermediary also be liable and if so in what circumstances? The analogies were with phone companies and bookshops or magazine distributors, with primary and secondary publishers in defamation, with primary and secondary infringement in copyright, and similar distinctions drawn in other areas of the law.

In Europe the main outcome of this debate was the E-Commerce Directive, passed at the turn of the century and implemented in the UK in 2002. It laid down the well-known categories of conduit, caching and hosting. Most relevantly to platforms, for hosting it provided a liability shield based on lack of knowledge of illegality. Only if you gained knowledge that an item of content was unlawful, and then failed to remove that content expeditiously, could you be exposed to liability for it. This was closely based on the bookshop and distributor model.

The hosting liability regime was – and is – similar to the notice and takedown model of the US Digital Millennium Copyright Act – and significantly different from S.230 of the US Communications Decency Act 1996, which provided something more closely akin to full conduit immunity.

The E-Commerce Directive’s knowledge-based hosting shield incentivises – but does not require – a platform to remove user content on gaining knowledge of illegality. Failing to do so exposes the platform to a risk of liability under the relevant underlying law. That is all it does. Liability does not automatically follow.

Of course the premise underlying all of these regimes is that the user has broken some underlying substantive law. If the user hasn’t broken the law, there is nothing that the platform could be liable for.

It is pertinent to ask – for whose benefit were these liability shields put in place? There is a tendency to frame them as a temporary inducement to grow the then nascent internet industry. Even if there was an element of that, the deeper reason was to protect the legitimate speech of users. The greater the liability burden on platforms, the greater their incentive to err on the side of removing content, the greater the risk to legitimate speech and the greater the intrusion on the fundamental speech rights of users. The distributor liability model adopted in Europe, and the S.230 conduit model in the USA, were for the protection of users as much as, if not more than, for the benefit of platforms.

The Shift to Responsibility has taken two forms.

First, the increasing volume of the ‘publishers not platforms’ narrative. The view is that platforms are curating and recommending user content and so should not have the benefit of the liability shields. As often and as loudly as this is repeated, it has gained little legislative traction. Under the Online Safety Bill the liability shields remain untouched. In the EU Digital Services Act the shields are refined and tweaked, but the fundamentals remain the same. If, incidentally, we think back to the bookshop analogy, it was never the case that a bookshop would lose its liability shield if it promoted selected books in its window, or decided to stock only left-wing literature.

Second, and more significantly, has come a shift towards imposing positive obligations on platforms. Rather than just being exposed to risk of liability for failing to take down users’ illegal content, a platform would be required to do so on pain of a fine or a regulatory sanction. Most significant is when the obligation is proactive: rather than awaiting notification of illegal user content, the platform must take positive steps to seek out, detect and remove illegal content.

This has gained traction in the UK Online Safety Bill, but not in the EU Digital Services Act. There is in fact a 180-degree divergence between the UK and the EU on this topic. The DSA repeats and re-enacts the principle first set out in Article 15 of the E-Commerce Directive: the EU prohibition on Member States imposing general monitoring obligations on conduits, caches and hosts. Although the DSA imposes some positive diligence obligations on very large operators, those still cannot amount to a general monitoring obligation.

The UK, on the other hand, has abandoned its original post-Brexit commitment to abide by Article 15, and – under the banner of a duty of care – has gone all out to impose proactive, preventative detection and removal duties on platforms: not only for public forums, but extending to powers for Ofcom to require private messaging services to scan for CSEA content.

Proactive obligations of this kind raise serious questions about a state’s compliance with human rights law, due to the high risk that in their efforts to determine whether user content is legal or illegal, platforms will end up taking down users’ legitimate speech at scale. Such legal duties on platforms are subject to especially strict scrutiny, since they amount to a version of prior restraint: removal before full adjudication on the merits, or – in the case of upload filtering – before publication.

The most commonly cited reason for these concerns is that platforms will err on the side of caution when faced with the possibility of swingeing regulatory sanctions. However, there is more to it than that: the Online Safety Bill requires platforms to make illegality judgements on the basis of all information reasonably available to them. But an automated system operating in real time will have precious little information available to it – hardly more than the content of the posts. Arbitrary decisions are inevitable.

Add that the Bill requires the platform to treat user content as illegal if it has no more than “reasonable grounds to infer” illegality, and we have baked-in over-removal at scale: a classic basis for incompatibility with fundamental freedom of speech rights; and the reason why in 2020 the French Constitutional Council held the Loi Avia unconstitutional.

The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression.

From Illegality to Harm

For so long as the platform regulation debate centred around liability, it also had to be about illegality: if the user’s post was not illegal, there was nothing to bite on - nothing for which the intermediary could be held liable.

But once the notion of responsibility took hold, that constraint fell away. If a platform could be placed under a preventative duty of care, that could be expanded beyond illegality. That is what happened in the UK. The Carnegie UK Trust argued that platforms ought to be treated analogously to occupiers of physical spaces and owe a duty of care to their visitors, but extended to encompass types of harm beyond physical injury.

The fundamental problem with this approach is that speech is not a tripping hazard. Speech is not a projecting nail, or an unguarded circular saw, that will foreseeably cause injury – with no possibility of benefit – if someone trips over it. Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people. A duty of care is workable for risk of objectively ascertainable physical injury but not for subjectively perceived and contested harms, let alone more nebulously conceived harms to society. The Carnegie approach also glossed over the distinction between a duty to avoid causing injury and a duty to prevent others from injuring each other (imposed only exceptionally in the offline world).

In order to discharge such a duty of care the platform would have to balance the interests of the person who claims to be traumatised by reading something to which they deeply object, against the interests of the speaker, and against the interests of other readers who may have a completely different view of the merits of the content.

That is not a duty that platforms are equipped, or could ever have the legitimacy, to undertake; and if the balancing task is entrusted to a regulator such as Ofcom, that is tantamount to asking Ofcom to write a parallel statute book for online speech – something which many would say should be for Parliament alone.

The misconceived duty of care analogy has bedevilled the Online Harms debate and the Bill from the outset. It is why the government got into such a mess with ‘legal but harmful for adults’ – now dropped from the Bill.

The problems with subjectively perceived harm are also why the government ended up abandoning its proposed replacement for S.127(1) of the Communications Act 2003: the harmful communications offence.

From general law to discretionary regulation

I started by highlighting the difference between individual speech governed by the general law and regulation by regulator. We can go back to the 1990s and find proposals to apply broadcast-style discretionary content regulation to the internet. The pushback was equally strong. Broadcast-style regulation was the exception, not the norm. It was born of spectrum scarcity and had no place in governing individual speech.

ACLU v Reno (the US Communications Decency Act case) applied a medium-specific analysis to the internet and placed individual speech – analogised to old-style pamphleteers – at the top of the hierarchy, deserving of greater protection from government intervention than cable or broadcast TV.

In the UK the key battle was fought during the passing of the Communications Act 2003, when the internet was deliberately excluded from the content remit of Ofcom. That decision may have been based more on practicality than principle, but it set the ground rules for the next 20 years.

It is instructive to hear peers with broadcast backgrounds saying what a mistake it was to exclude the internet from Ofcom’s content remit in 2003 - as if broadcast is the offline norm and as if Ofcom makes the rules about what we say to each other in the street.

I would suggest that the mistake is being made now – both by introducing regulation by regulator and in consigning individual speech to the bottom of the heap.

From right to risk

The notion has gained ground that individual speech is a fundamental risk, not a fundamental right: that we are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back into its bottle.

Other shifts

We can detect other shifts. The blossoming narrative that if someone does something outrageous online, the fault is more with the platform than with the perpetrator. The notion that platforms have a greater responsibility than parents for the online activities of children. The relatively recent shift towards treating large platforms as akin to public utilities, on which obligations not to remove some kinds of user content can legitimately be imposed. We see this chiefly in the Online Safety Bill’s obligations on Category 1 platforms in respect of content of democratic importance, news publisher content and journalistic content.

From Global to Local

I want to finish with something a little different: the shift from Global to Local. Nowadays we tend to have a good laugh at the naivety of the 1990s cyberlibertarians who thought that the bits and bytes would fly across borders and there was not a thing that any nation state could do about it.

Well, the nation states had other ideas, starting with China and its Great Firewall. How successfully a nation state can insulate its citizens from cross-border content is still doubtful, but perhaps more concerning is the mindset behind an increasing tendency to seek to expand the territorial reach of local laws online – in some cases, effectively seeking to legislate for the world.

In theory a state may be able to do that. But should it? The ideal is peaceful coexistence of conflicting national laws, not ever more fervent efforts to demonstrate the moral superiority and cross-border reach of a state’s own local law. Over the years a de facto compromise had been emerging, with the steady expansion of the idea that you engage the laws and jurisdiction of another state only if you take positive steps to target it. Recently, however, some states have become more expansive – not least in their online safety legislation.

The UK Online Safety Bill is a case in point, stipulating that a platform is in-scope if it is capable of being used in the United Kingdom by individuals, and there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom presented by user content on the site.

That is close to a ‘mere accessibility’ test – but not as close as the Australian Online Safety Act, which brings into scope any social media site accessible from Australia.

There has long been a consensus against ‘mere accessibility’ as a test for jurisdiction. It leads either to geo-fencing of websites or to global application of the most restrictive common content denominator. That consensus seems to be in retreat.

Moreover, the more exorbitant the assertion of jurisdiction, the greater the headache of enforcement. That in turn leads to what we see in the UK Online Safety Bill, namely provisions for disrupting the activities of the non-compliant foreign platform: injunctions against support services such as banking or advertising, and site blocking orders against ISPs.

The concern has to be that in their efforts to assert themselves and their local laws online, nation states are not merely re-erecting national borders with a degree of porosity, but erecting Berlin Walls in cyberspace.