Monday 22 July 2024

The Online Safety Act illegality duties - a regime about content?

Part 3 of a short series of reflections on Ofcom’s Illegal Harms consultation under the Online Safety Act 2023 (OSA). 

This post analyses the illegality duties created by the OSA. That is not as simple as one might hope. The intrepid reader has to hack their way through a thicket of separately defined, subtly differing, duties. Meanwhile they must try to cope with a proliferation of competing public narratives about what the Act is - or ought to be - doing, such as the suggestion that the regime is really about systems and processes, not content. 

The specific safety duties imposed by the OSA are the logical starting point for an analysis of what the Act requires of service providers. I have categorised the duties in a way which, it is hoped, will illuminate how Ofcom has approached the consultation.  

At the highest level the Act’s illegality duties are either substantive duties or risk assessment duties. The substantive duties may require the service provider to take measures affecting the design or operation of the service. The outcomes of the obligatory risk assessment feed into some of the substantive duties.

In my proposed categorisation the substantive service provider duties fall into three categories: content-based, non-content-based and harm-based. A content-based duty is framed solely by reference to illegal content, with no mention of harm. A non-content-based duty makes no mention of either content or harm, but may refer to illegal activity. A harm-based duty is framed by reference to harm arising from or caused by illegal content or activity.  

The tables below divide the various substantive duties into these categories.

The risk assessment duties are a more extensive and complex mixture of duties framed by reference to illegal content, illegal activities and harm.

The differences in framing are significant, particularly for the substantive duties, since they dictate the kinds of measures that could satisfy the different kinds of duty. For instance:

An operational substantive content-based duty is focused on identifying and addressing items of illegal content — such as by reactively removing and taking them down. Recommended measures have to reflect that. (But see discussion below of the difference between a design duty and an operational duty in the context of a proactive, preventive duty.)  

A substantive duty to mitigate risk of harm is less focused. A recommended measure would not necessarily involve evaluating the illegality of content. It could consist of a measure that either does not affect content at all, or does so content-agnostically. A reporting button would be an example of such a measure, as would a cap on the permissible number of reposts for all content across the board.

But a recommended measure within this category might yet have content-related aspects. It could be suggested, for instance, that in order to mitigate the risk of harm arising from a particular kind of illegal content the provider should identify content of that kind and take recommended steps in relation to it.

Similarly a risk assessment duty could involve the service provider in evaluating individual items of content, even if the duty ranges more widely: 

“Your risk assessment should not be limited to an assessment of individual pieces of content, but rather consider how your service is used overall. In particular, the requirement to assess the risk of the service being used to commit or facilitate priority offences may mean considering a range of content and behaviour that may not amount to illegal content by itself.” [Service Risk Assessment Guidance, Boxout preceding para A5.42.]

It can be seen in the second table below that the harm-based risk assessment duties are further divided into content-related and non-content-related. The former all make some reference to illegal content in the context of a harm-based duty.

Within the framework of the Act, not all illegality is necessarily harmful (in the Act's defined sense) and not all kinds of harm are in scope. There is a conceptual separation between illegality per se, illegality that causes (or risks causing) harm as defined, and harm that may arise otherwise than from illegality. Those category distinctions have to be borne in mind when analysing the illegality duties imposed by the Act. It bears repeating that when the Act refers to harm in connection with service provider duties it has a specific, limited meaning: physical or psychological harm.

So equipped, we can categorise and tabulate the illegality duties under the Act. For simplicity the tables show only the duties applicable to all U2U service providers, starting with the substantive duties: 

Substantive duties

| Category | Description | OSA section reference |
| --- | --- | --- |
| Content-based | Proportionate measures relating to the design or operation of the service to prevent individuals encountering priority illegal content by means of the service. | 10(2)(a) |
| Content-based | Operate the service using proportionate systems and processes designed to minimise the time for which priority illegal content is present. | 10(3)(a) |
| Content-based | Operate the service using proportionate systems and processes designed to swiftly take down illegal content where the provider is alerted to, or otherwise becomes aware of, its presence. | 10(3)(b) |
| Non-content-based | Proportionate measures relating to the design or operation of the service to effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence identified in the most recent illegal content risk assessment. | 10(2)(b) |
| Harm-based | Proportionate measures relating to the design or operation of the service to effectively mitigate and manage the risks of harm to individuals identified in the most recent illegal content risk assessment (see 9(5)(g)). | 10(2)(c) |

 These are the illegality risk assessment duties:

Risk assessment duties

| Category | Description | OSA section reference |
| --- | --- | --- |
| Content-based | Assess level of risk of users encountering illegal content of each kind | 9(5)(b) |
| Content-based | Assess level of risk of functionalities facilitating presence or dissemination of illegal content | 9(5)(e) |
| Non-content-based | Assess level of risk of the service being used for commission or facilitation of a priority offence | 9(5)(c) |
| Non-content-based | Assess level of risk of functionalities facilitating use of the service for commission or facilitation of a priority offence | 9(5)(e) |
| Non-content-based | Assess how the design and operation of the service may reduce or increase the risks identified | 9(5)(h) |
| Harm-based (content-related) | Assess level of risk of harm to individuals presented by illegal content of different kinds | 9(5)(d) |
| Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified risk of harm to individuals presented by illegal content of different kinds | 9(5)(d) and (g) |
| Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified risk of individual users encountering illegal content | 9(5)(b) and (g) |
| Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified risk of functionalities facilitating presence or dissemination of illegal content | 9(5)(e) and (g) |
| Harm-based (non-content-related) | Assess level of risk of harm to individuals presented by use of the service for commission or facilitation of a priority offence | 9(5)(d) |
| Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified level of risk of harm to individuals presented by use of the service for commission or facilitation of a priority offence | 9(5)(d) and (g) |
| Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified risk of the service being used for commission or facilitation of a priority offence | 9(5)(c) and (g) |
| Harm-based (non-content-related) | Assess different ways in which the service is used and the impact of such use on the level of risk of harm that might be suffered by individuals | 9(5)(f) |
| Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from the identified risk of different ways in which the service is used and the impact of such use on the level of risk of such harm | 9(5)(f) and (g) |

The various harm-based illegality risk assessment duties feed into the substantive harm mitigation and management duty in S.10(2)(c). That duty stands independently of the three content-based duties and the one non-content-based illegality duty, as can be seen from the tables and as illustrated in the accompanying visualisation.


Content or systems and processes?

In March last year the Ofcom CEO drew a contrast between systems and processes and a content regime, suggesting that the OSA is about the former and not really the latter (see Introduction). 

The idea that the Act is about systems and processes, not content, prompts a close look at the differences in framing of the substantive illegality duties. As can be seen from the table above, the proactive prevention duty in S.10(2)(a) requires the service provider:

“to take or use proportionate measures relating to the design or operation of the service to… prevent individuals from encountering priority illegal content by means of the service” (emphasis added)

Thus a measure recommended by Ofcom to comply with this duty could be limited to the design of the service, reflecting the ‘safe by design’ ethos mentioned in Section 1(3) of the Act. As such, the S.10(2)(a) duty, although content-based, has no necessary linkage to assessing the illegality of individual items of user content posted to the service. That possibility, however, is not excluded. A recommended proactive measure could involve proactively detecting and blocking individual items, as contemplated by the section’s reference to operational measures.

In contrast, the content removal duty in S.10(3)(b) is specifically framed in terms of use of operational systems and processes: 

“(3) A duty to operate a service using proportionate systems and processes designed to … (b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content” (emphasis added)

In view of this drafting it is not surprising that, notwithstanding the reference to proportionate systems and processes, Ofcom in several places characterises the Act as imposing a simple duty to remove illegal content: 

"When services make an illegal content judgement in relation to particular content and have reasonable grounds to infer that the content is illegal, the content must however be taken down." [para 26.14, Illegal Content Judgements Guidance discussion]

“Within the illegal content duties there are a number of specific duties. As part of the illegal content safety duty at section 10(3)(b) of the Act, there is a duty for a user-to-user service to “swiftly take down” any illegal content when it is alerted to the presence of it (the ‘takedown duty’).” [para A1.14, draft Illegal Content Judgements Guidance]

“As in the case of priority illegal content, services are required to take down relevant nonpriority illegal content swiftly when they are made aware of it.” [para 2.41, Volume 1]

“Where the service is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, services must swiftly take down such content (section 10(3)(b)).” [para 12.6, Volume 4]

The framing of the S.10(3)(b) duty is tightly focused on the removal actions to be taken by a service provider in relation to individual items of content when it becomes aware of them. The service provider must have proportionate systems and processes designed to achieve the result (take down), and must use them in the operation of the service. 

This duty inescapably requires individual judgements to be made. It does not, however, from a supervision and enforcement point of view, require a service provider to get every individual judgement right.

From the perspective of the service provider the focus of Ofcom’s supervision and enforcement will indeed be on its systems and processes:

“It is important to make clear that, as the regulator, Ofcom will not take a view on individual pieces of online content. Rather, our regulatory approach is to ensure that services have the systems and processes in place to meet their duties.” [Overview, p.17; Volume 4, p.19]

“In line with the wider approach to delivering online safety, supervision will focus on the effectiveness of services’ systems and processes in protecting their users, not on individual pieces of content.” [para 30.5, Volume 6]

That does not mean, however, that (as Ofcom appears to suggest in the following extract) the regulatory regime is not about regulating individual user content: 

“It is important to note that the Online Safety regime is about service providers’ safety systems and processes, not about regulating individual content found on such services. The presence of illegal content or content that is potentially harmful to children does not necessarily mean that a service provider is failing to fulfil its duties in the Act. We would not therefore be likely to take action solely based on a piece of harmful content appearing on a regulated service.” [Enforcement Guidance, para A3.6]

The purpose of at least the operational content-based aspects of the OSA regime is to harness the control that intermediary service providers can exercise over their users in order (indirectly) to regulate individual items of user content. The fact that a service provider may not be penalised for an individual misjudgement does not alter the fact that the service provider has to have a system in place to make judgements and that the user in question stands to be affected when their individual post is removed as a result of a misjudgement. The operational content-based duties exist and are about regulating individual content.

Parenthetically, it is also perhaps doubtful whether Ofcom can, in reality, perform its role entirely in the abstract and avoid making its own judgements on individual items of content. For instance, if prevalence of illegal material on a service is relevant to Ofcom’s enforcement priority assessment, how could that be determined without evaluating whether individual items of user content on the service are illegal?

Some may believe that Ofcom's approach is too heavily weighted towards individual content judgements by service providers. However, it is difficult to see how such content-based measures can be avoided when content-based duties (removal and takedown of individual items of illegal content in the operation of the service) are hardwired into the Act, requiring the exercise of judgement to determine whether the content in question is or is not illegal.

That said, in the area of the S.10(2) proactive preventive measures Ofcom has, as already alluded to, much broader scope to assess proportionality and decide what kinds of measures to recommend (or not) and what safeguards should accompany them. That will be the subject of Part 4 of this series.




The Online Safety Act illegality duties: a regime about harm?

This is Part 2 of a short series of reflections on Ofcom's Illegal Harms consultation under the Online Safety Act 2023 (OSA). Ofcom is currently considering submissions following the closure of the consultation in February 2024.

The very title of the Ofcom consultation — Illegal Harms — prompts questions about the illegality duties. Are they about illegal content? Are they about harm? Is all illegal content necessarily harmful? What does the Act mean by harm?

What is meant by harm?

The answer to the last question ought to be simple. For the purpose of the safety duties, harm means “physical or psychological harm”. For the remaining questions, the devil resides in the tangled undergrowth of the Act (discussed in Part 3, analysing and categorising the Act's illegal content duties). 

However, the Act’s specific meaning of harm is often glossed over.  Ofcom’s Quick Guide to illegal content risk assessments mentions ‘harm’ or ‘illegal harm’ 16 times without pointing out that for the purpose of the safety duties harm has a defined meaning. Considering how many of the illegal content risk assessment duties are framed by reference to harm (see table in Part 3), it is striking that neither the consultation section explaining Ofcom’s approach to the risk assessment duty (Volume 3), nor the draft Illegality Risk Assessment Guidance itself (Annex 5), mentions the Act’s specific meaning of harm.

The four-page consultation Overview is similarly lacking, while mentioning ‘harm’ or ‘illegal harm’ 16 times. Nor is the definition mentioned in Ofcom’s 39-page summary of each chapter of the consultation, or in the consultation’s 38-page Volume 1, Background to the Online Safety regime.

At the start of Volume 2 of the consultation (the Ofcom Register of Risks for illegal content) we do find:

“The Online Safety Act (the Act) requires Ofcom to carry out sector-wide risk assessments to identify and assess the risk of physical and psychological harm to individuals in the UK presented by regulated user-to-user (U2U) and search services, and to identify characteristics relevant to such risks of harm.” [para 5.1]

The footnote to that paragraph says:

“‘Risks of harm’ refers to the harm to individuals presented by (a) content on U2U or search services that may amount to the offences listed in the Act, and (b) the use of U2U services for the commission and/or facilitation of these offences (collectively, the ‘risks of harm’). ‘Harm’ means physical or psychological harm; we discuss physical or psychological harm as part of our assessment of the risks of harm.”

The Register of Risks section then continues to emphasise the Act’s specific meaning of harm.  Of the four separate glossaries and lists of definitions contained in the consultation papers, only the Register of Risks Glossary includes the Act’s definition of harm. 

A footnote to a paragraph in the Register of Risks section, referencing an Ofcom survey, acknowledges the risk of overreach in unbounded references to harm:

“63% of internet users 13 years old and over had seen or experienced something potentially harmful in the past four weeks.[Note: these may capture a broad range of potentially harmful experiences that go beyond illegal harms] Source: Ofcom, 2022. Online Experiences Tracker. [accessed 10 September 2023].” [footnote 22, para 6.1]

To give an example, the survey in question prompted respondents that potential harm included “Generally offensive or ‘bad’ language, e.g. swearing, rudeness”. 

The OSA's illegality duties have not generally embraced the broader and vaguer concepts of societal harm that were discussed at the time of the Online Harms White Paper. Nevertheless, the Ofcom consultation is not immune from straying into the territory of societal harm:

“In most cases the harms we have looked at primarily affect the individual experiencing them. However, in some cases they have a wider impact on society as a whole. For instance, state-sponsored disinformation campaigns can erode trust in the democratic process. All this underlines the need for the new legislation and shows that, while many services have made significant investments in tackling online harm in recent years, these have not yet been sufficient.” [Boxout, Volume 2, p.8]

A state-sponsored disinformation campaign could be relevant to this consultation only if it constitutes an offence within the purview of the Act (e.g. the new Foreign Interference offence). Even then, only a risk of physical or psychological harm to an individual could be relevant to determining what kinds of illegality safety duty might be triggered: content-based, non-content-based or harm-based (as to which, see the discussion in Part 3 of the different categories of duty created by the Act). For the purpose of the Act's illegality safety duties, the “wider impact on society as a whole” is not a relevant kind of harm. 

Returning to the consultation title, the term ‘Illegal Harm’ does not appear in the Act. The consultation Glossary essays a definition: 

"Harms arising from illegal content and the commission and facilitation of priority offences".

Volume 1, which describes the illegal content duties, contains a slightly fuller version:

“‘illegal harm’ – this refers to all harm arising from the relevant offences set out in the Act, including harm arising from the presence of illegal content online and (where relevant) the use of online services to commit or facilitate priority offences. …” [Volume 1, para 1.23]

‘Harm’, as already mentioned, is defined in the Act as physical or psychological harm. 

If that definition of Illegal Harm is meant to describe the overall subject-matter of the illegality duties it is incomplete, since in its terms it can apply only to those illegality duties that are framed by reference to harm.

In any event the consultation does not use the phrase ‘Illegal Harms’ consistently with that definition.  It sometimes reads as a catch-all for the illegality duties generally. Often it refers to the underlying offences themselves rather than the harm arising from them. Thus on the second page of the Consultation at a Glance: “The 15 different kinds of illegal harms set out in Ofcom’s draft risk assessment guidance are: Terrorism offences…”.

In places it is difficult to be sure in what sense the term ‘illegal harm’ is being used, or the sense in which it is used changes from one paragraph to the next. An example is in the Risk Assessment Guidance:

“A5.23 You must assess the risk of each kind of illegal harm occurring on your service. U2U services need to consider the risk of:

Illegal content appearing on the service – for example, content inviting support for a proscribed organisation (e.g. a terrorist group);

An offence being committed using the service – for example, a messaging service being used to commit grooming offences, in a situation where adults can use the service to identify and contact children they do not know; and

An offence being facilitated by use of the service – for example, the use of an ability to comment on content to enable harassment.

A5.24 You must assess the likelihood of these illegal harms taking place, and the potential impact (i.e. the nature and severity of harm to individuals).

A5.25 As long as you are covering all of the risks of harm to individuals, you can assess these three aspects together when you assess each kind of illegal harm.”

In para A5.23 ‘illegal harm’ is being used to describe three different varieties of risk assessment duty: Sections 9(5)(b), (c) and (e). None of those is framed in the Act by reference to harm. Nor does the Glossary definition of illegal harm fit the usage in this paragraph. 

Paras A5.24 and A5.25 then refer to harm to individuals. That reflects a duty which is framed by reference to harm (Section 9(5)(g)). It is a duty to which the defined meaning of physical or psychological harm applies.

Relatedly, shortly afterwards the Guidance says:

“In the risk assessment, the key objective is for you to consider in a broad way how your service may be used in a way that leads to harm.” (Box-out following A5.41)

It does not mention the Act’s definition of harm (assuming, as presumably it must do, that that is what the Guidance means by harm in this context).

Shorthand is probably unavoidable in the challenging task of rendering the consultation and the OSA understandable. But when it comes to describing a key aspect of the safety duties, shorthand can result in confusion rather than clarity. The term 'illegal harm' is especially difficult, and might have been best avoided. Given its specific meaning in the Act, rigorous and clear use of the term ‘harm' is called for.


Reflections on Ofcom's Illegal Harms consultation

Shortly after the Online Safety Act (OSA) gained Royal Assent in October 2023, Ofcom issued a 1,728-page consultation on Illegal Harms. This was the first step in Ofcom's lengthy journey towards implementing and giving concrete substance to the various duties that the Act will place on user-to-user (U2U) service providers and search engines.

The output of the Illegal Harms process (one of several consultations that Ofcom has to undertake) will be Codes of Practice, accompanied by Guidance documents on specific topics mandated by the Act, plus a Register of Risks. Ofcom anticipates the final versions of the Codes coming into force around the end of 2024 or the beginning of 2025.

The weight of the Illegal Harms consultation confounded even those of us who have argued that the OSA’s design is misconceived and would inevitably result in a legal and regulatory quagmire. Then, just a week before the Ofcom consultation closed in February 2024, the Information Commissioner's Office added its own contribution: 47 pages of guidance on how data protection law applies to online content moderation processes, including moderation carried out to comply with duties under the OSA. In June 2024 the ICO invited feedback on that.

The significance of Ofcom’s Codes of Practice is that a service provider is deemed to comply with the Act’s safety duties if it implements the recommendations of a Code of Practice. Ofcom’s Guidance documents are intended to assist service providers in implementing the Codes. 

The consultation documents are not for the faint-hearted. The draft Guidance on Illegal Content Judgements, for instance, runs to 390 pages (perhaps unsurprisingly when, as Ofcom notes, the Act lists over 130 priority offences to which the safety duties apply). Parliamentarians may have assumed that determining whether user content is illegal is a simple matter for a service provider. The Guidance, unsurprisingly, reveals otherwise.

Some will think that the Ofcom consultation over-emphasises content moderation at the expense of ‘safety by design’ measures, which would not necessarily depend on distinguishing between legal and illegal user content.

Indeed, Ofcom itself has previously downplayed the content moderation aspects of the regime. In March last year the head of Ofcom, Melanie Dawes, told POLITICO that the then Online Safety Bill was:

"not really a regime about content. It's about systems and processes. It's about the design of the (social media) service that does include things like the recommender algorithms and how they work"

When the Bill gained Royal Assent in October 2023, she told the BBC that: 

"Ofcom is not a censor, and our new powers are not about taking content down. Our job is to tackle the root causes of harm."

A few weeks later the Illegal Harms consultation stated that the "main" duties relating to illegal content are for services:

"to assess the risk of harm arising from illegal content ... or activity on their service, and take proportionate steps to manage and mitigate those risks." (Volume 1, Introduction.)

The introductory Section 1 of the Act, added in the Bill’s later stages, similarly emphasises risk of harm: 

"[T]his Act (among other things) ... imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from illegal content and activity..."

That introductory section, however, is not – and does not purport to be – a complete description of the safety duties. The Act also imposes content-based duties: duties that are expressly framed in terms of, for instance, removing illegal content. Those sit alongside the duties that are expressed in other ways, such as managing and mitigating harm (defined in the Act as “physical or psychological harm”).

The Bill's Impact Assessment estimated that the largest proportion of compliance costs (£1.9 billion over 10 years) would be incurred in increased content moderation. It is not a surprise that content moderation features strongly in the consultation.

The overall impression given by the consultation is that Ofcom is conscious of the challenges presented by requiring service providers to adjudge the illegality of user content as a preliminary to blocking or removal, all the more so if they must also act proactively to detect it. Ofcom has made few recommendations for automated proactive detection and blocking of user content. However, it may be about to dip its toes further into those perilous waters: 

“…we are planning an additional consultation later this year on how automated tools, including AI, can be used to proactively detect illegal content and content most harmful to children – including previously undetected child sexual abuse material.” (A window into young children’s online worlds, 19 April 2024)

Ofcom also appears to have sought, within the constraints of the Act, to minimise the compliance burden on small and medium businesses, individual service providers and non-commercial entities. Politicians may have convinced themselves that the legislation is all about big US social media companies and their algorithms. Ofcom has drawn the short straw of implementing the Act as it actually is: covering (according to the Impact Assessment) an estimated 25,000 UK businesses alone, 80% of which are micro-businesses (fewer than 10 employees).

That much by way of general introduction. After several attempts ended up bogged down in the quagmire, I have finally composed this short series of reflections on selected aspects of the Illegal Harms consultation and what it reveals about the challenges of giving concrete form to the Act. 

Part 2 of the series takes a more detailed look at how the harm-based aspects of the service provider duties have played out in the consultation. Part 3 categorises the U2U service provider duties (content-based, non-content-based and harm-based) and analyses the ‘systems and processes’ versus ‘content’ dichotomy.  There may be more to come...



Wednesday 22 May 2024

Internet jurisdiction revisited

As legal and policy topics go, cross-border internet jurisdiction is evocative of a remote but restless volcano: smouldering away mostly unnoticed by public and lawyers alike, only to burst spectacularly into life at odd intervals.

The latest eruption has occurred in Australia, where last month the Australian eSafety Commissioner launched legal proceedings for an injunction against X Corp (Twitter) requiring it to remove or hide from all users worldwide a video of a stabbing attack on an Australian bishop. X Corp argues that geo-blocking the content from Australian users is sufficient. The eSafety Commissioner disagrees, since Australian users equipped with VPNs can evade the block. In a judgment published on 14 May 2024 Kennett J, a judge of the Federal Court of Australia, sided with X Corp on an interim basis and declined to continue a previously granted emergency injunction.

The underlying policy issue is that the internet is readily perceived as undermining local laws, since material posted on the internet outside the local jurisdiction is, by default, available worldwide. The Canadian Supreme Court in Equustek put it thus:

“Where it is necessary to ensure the injunction’s effectiveness, a court can grant an injunction enjoining conduct anywhere in the world. The problem in this case is occurring online and globally. The Internet has no borders — its natural habitat is global. The only way to ensure that the interlocutory injunction attained its objective was to have it apply where Google operates — globally.”

The countervailing concern is that when a court acts in that way in order to secure the effectiveness of its local law, it is asserting the right to impose that law on the rest of the world, where the material in question may be legal. Assertion of extraterritorial jurisdiction has always had the potential to create friction between nation states. When the internet arrived, its inherent cross-border nature created additional policy tensions that, 30 or more years on, have yet to be fully resolved.

The background to the current dispute is Australia’s Online Safety Act 2021. A social media service is in scope of the Act unless “none of the material on the service is accessible to, or delivered to, one or more end-users in Australia” (S.13(4)).  Thus any social media service in the world is within the reach of the Australian legislation, unless it can and does take steps that prevent all Australian users from accessing its content.  

However, the general territorial scope of the Act is not the end of the story. Under the Act the eSafety Commissioner can issue a removal notice in respect of ‘Class 1’ material if (among other things) the Commissioner is satisfied that the material can be accessed by end-users in Australia. A removal notice requires the service provider to “take all reasonable steps to ensure the removal of the material from the service”.  

Echoing S.13(4), S.12 provides that material is ‘removed’ if “the material is neither accessible to, nor delivered to, any of the end-users in Australia using the service.” (The court interpreted this as meaning all users physically located in Australia.)

The Commissioner sought continuation of the previously granted emergency injunction pending trial. The court therefore had to decide whether there was a real issue to be tried as to whether the final injunction sought by the Commissioner would go further than the “reasonable steps” that were all that a removal notice could require.  

X Corp had agreed to geoblock the 65 URLs specified in the removal notice, so that they are not accessible to users with IP addresses in Australia. The eSafety Commissioner sought an injunction that would require X Corp to remove the 65 URLs from its platform altogether, or make them inaccessible to all users. The Commissioner argued that such action was within the “all reasonable steps” that the removal notice required to be taken. X Corp argued that a requirement for worldwide removal or blocking of the material goes beyond what is “reasonable”.

The court held that although a voluntary decision by X Corp to remove the 65 URLs altogether would be reasonable (in the sense of easily justified), that was not the test where the Act imposes its requirements regardless of the wishes of providers and of individual users. “Reasonable” should therefore be understood as limiting what must be done to the steps that it is reasonable to expect or require the provider to undertake. Assessing what is reasonable involves not only considerations of expense, technical difficulty and time for compliance, but also (the issue that divided the parties) the other interests that are affected.

Significantly, when considering the other interests affected, the court brought into consideration the ‘comity of nations’. At an earlier point in the judgment Kennett J had said:

“The policy questions underlying the parties’ dispute are large. They have generated widespread and sometimes heated controversy. Apart from questions concerning freedom of expression in Australia, there is widespread alarm at the prospect of a decision by an official of a national government restricting access to controversial material on the internet by people all over the world. It has been said that if such capacity existed it might be used by a variety of regimes for a variety of purposes, not all of which would be benign. The task of the Court, at least at this stage of the analysis, is only to determine the legal meaning and effect of the removal notice. That is done by construing its language and the language of the Act under which it was issued. It is ultimately the words used by Parliament that determine how far the notice reaches.”

Nevertheless, when it came to consider reasonableness as a matter of construction of the language of the Act, something very like those considerations reappeared:

“49    If s 109 of the OS Act provided for a notice imposing such a requirement, it would clash with what is sometimes described as the “comity of nations” in a fundamental manner. …

50    If given the reach contended for by the Commissioner, the removal notice would govern (and subject to punitive consequences under Australian law) the activities of a foreign corporation in the United States (where X Corp’s corporate decision-making occurs) and every country where its servers are located; and it would likewise govern the relationships between that corporation and its users everywhere in the world.

The Commissioner, exercising her power under s 109, would be deciding what users of social media services throughout the world were allowed to see on those services. The content to which access may be denied by a removal notice is not limited to Australian content.

In so far as the notice prevented content being available to users in other parts of the world, at least in the circumstances of the present case, it would be a clear case of a national law purporting to apply to “persons or matters over which, according to the comity of nations, the jurisdiction properly belongs to some other sovereign or State”. Those “persons or matters” can be described as the relationships of a foreign corporation with users of its services who are outside (and have no connection with) Australia. What X Corp is to be permitted to show to users in a particular country is something that the “comity of nations” would ordinarily regard as the province of that country’s government.

51    The potential consequences for orderly and amicable relations between nations, if a notice with the breadth contended for were enforced, are obvious. Most likely, the notice would be ignored or disparaged in other countries. (The parties on this application tendered reports by experts on US law, who were agreed that a US court would not enforce any injunction granted in this case to require X Corp to take down the 65 URLs.)”

In similar vein the judge went on to consider the balance of convenience, in case he was wrong on the construction of the statute:

“56    If the considerations relating to the comity of nations (discussed at [48]–[51] above) had not led me to the view that the Commissioner has not made out a prima facie case, the same considerations would have led me to conclude that the balance of convenience does not favour extending the interlocutory injunction in its current (or any similar) form.

57    On the one hand the injunction, if complied with or enforced, has a literally global effect on the operations of X Corp, including operations that have no real connection with Australia or Australia’s interests. The interests of millions of people unconnected with the litigation would be affected. 

Justifying an interlocutory order with such a broad effect would in my view require strong prospects of success, strong evidence of a real likelihood of harm if the order is not made, and good reason to think it would be effective. At least the first and the third of these circumstances seem to be largely absent. The first is discussed above. 

As to the third, it is not in dispute that the stabbing video can currently be viewed on internet platforms other than X. I was informed that the video is harder to find on these platforms. The interim injunction is therefore not wholly pointless. However, removal of the stabbing video from X would not prevent people who want to see the video and have access to the internet from watching it.

58    On the other hand, there is uncontroversial expert evidence that a court in the US (where X Corp is based) would be highly unlikely to enforce a final injunction of the kind sought by the Commissioner; and it would seem to follow that the same is true of any interim injunction to similar effect. This is not in itself a reason why X Corp should not be held to account, but it suggests that an injunction is not a sensible way of doing that. Courts rightly hesitate to make orders that cannot be enforced, as it has the potential to bring the administration of justice into disrepute.”

A notable aspect of these passages is the approach to comity of nations, especially in the balance of convenience section which refers to the effect on millions of people unconnected with the litigation. It stands in significant contrast with the approach of the Canadian Supreme Court in Equustek (a trade mark and confidential information case).

The court in that case took an approach to comity that was both more abstract and more state-centric than that of Kennett J. It was abstract in that it was apparently sufficient that other countries would recognise the notion of intellectual property rights – without needing to consider the concrete question of whether the plaintiff in fact owned equivalent intellectual property rights throughout the world. It was more state-centric in that it focused entirely on the sensibilities of other states, without consideration of the individual interests and rights of users throughout the world.

Both differences are apparent from a passage in the British Columbia Court of Appeal judgment under appeal in Equustek, endorsed by the Canadian Supreme Court:

"In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation." [BCCA 93]

The notion that international law is about more than mere state interests gains some support from the academic Jeremy Waldron. He has referred to:

‘the peaceful and ordered world that is sought in [international law] – a world in which violence is restrained or mitigated, a world in which travel, trade and cooperation are possible. . . . [This, he says, is] something sought not for the sake of national sovereigns themselves, but for the sake of the millions of men, women, communities, and businesses who are committed to their care’ [J. Waldron, ‘Are Sovereigns Entitled to the Benefit of the International Rule of Law?’ (2011) 22 European Journal of International Law 325.]

The Australian case is due to go forward to a full trial in July 2024. It has the potential to become a test of the circumstances in which courts will exercise jurisdictional self-restraint over the internet.