Wednesday, 6 May 2026

Britannia rule the internet

Coined in 1740, ‘Britannia! rule the waves’ was a dawn-of-Empire exhortation to assert British naval power worldwide. Today, the Law Commission exhorts Britannia (or England and Wales, to be exact) to rule the internet worldwide: 

In our view, contempt laws should apply to all material that is accessible in England and Wales. (Part 1 Report on Liability for Contempt of Court, November 2025 [4.173])

The proposed law in question ('contempt by publication') would apply to publication of material that created a substantial risk that the course of justice in active legal proceedings in England and Wales would be seriously impeded or prejudiced. Fault would be established where the defendant knew the proceedings were active or was aware of a risk that they were active.

The Report goes on:

Given that the risk of interference with the course of justice may arise when material is accessible, it is necessary and proportionate for the protection of fair trial rights that the law does not preclude liability on the grounds that the material was uploaded outside the jurisdiction. This broad approach lends itself to clarity, certainty and consistency in the law. Using a VPN or moving briefly into another jurisdiction should not be left open as methods for people ordinarily resident in England and Wales to attempt to avoid liability. [4.173]

It is a defining feature of the internet that material is inherently accessible worldwide unless positive steps, such as geofencing, are taken to limit its accessibility. Accordingly, the Law Commission’s proposal equates to default worldwide applicability. The Report explains further:

An approach that imposes liability regardless of where and how material has been made available ensures that enforcement is possible in the widest range of circumstances where potentially prejudicial material is available to an audience in England and Wales. Even where enforcement is impractical or not possible, this approach signals that the right to a fair trial is nevertheless important and should be protected. [4.174]

As well as recommending worldwide online application, the Law Commission has proposed that a single set of liability rules should apply to both publishers and distributors. On the face of it the resulting regime could, de facto, oblige at least some platform operators to monitor for potentially prejudicial user posts if the operator is aware of proceedings being active or of a risk that they are active. However, this aspect of the proposals is not simple. It may be the subject of a future blogpost.

These are not the most auspicious times in which to be proposing worldwide applicability of local laws. The current US administration has, justifiably or not, loudly criticised the extraterritorial reach of European (including UK) online safety laws. None of those laws, it should be said, goes so far as to assert jurisdiction based on mere accessibility of the content.

How to approach cross-border liability on the internet has been a lively topic of academic discussion for many years. For instance in Solving the Internet Jurisdiction Puzzle (2017) Professor Dan Svantesson proposed, as an alternative to the traditional approach rooted in territorial sovereignty, a framework consisting of three core principles upon which exercising jurisdiction could be justified:

(1)   There is a substantial connection between the matter and the state seeking to exercise jurisdiction;

(2)  The state seeking to exercise jurisdiction has a legitimate interest in the matter; and

(3)  The exercise of jurisdiction is reasonable given the balance between the state’s legitimate interests and other interests.

For present purposes the significance of this suggested framework lies mainly in the third principle: it is not enough to consider jurisdictional reach only through the domestic lens of the state asserting jurisdiction. Other interests also have to be considered: typically those of states whose sovereignty may be affected and of persons overseas who may feel incentivised or compelled to modify their conduct as a result of the assertion of jurisdiction, especially if that conduct is lawful in their own country.

In the traditional framework the starting point would be to consider whether the jurisdiction asserted is extraterritorial, and if so to consider whether it can be justified under one of the categories recognised by international law. Questions of reasonableness may arise as part of that assessment, or otherwise as a matter of comity: due respect for the sensitivities of other nation states.

Although principally a state-centric doctrine, comity encompasses the interests of non-state persons in other countries. In a 2024 Australian case concerning the extraterritorial reach of Australia’s online safety legislation, Kennett J. said:

In so far as the notice [given by the eSafety Commissioner] prevented content being available to users in other parts of the world, at least in the circumstances of the present case, it would be a clear case of a national law purporting to apply to “persons or matters over which, according to the comity of nations, the jurisdiction properly belongs to some other sovereign or State”. Those “persons or matters” can be described as the relationships of a foreign corporation with users of its services who are outside (and have no connection with) Australia. What X Corp is to be permitted to show to users in a particular country is something that the “comity of nations” would ordinarily regard as the province of that country’s government. (eSafety Commissioner v X Corp [2024] FCA 499, [50])

Worldwide mere accessibility as the jurisdictional threshold is likely to raise comity issues, most obviously with countries that take a significantly less strict approach to commenting on court proceedings; perhaps all the more so if worldwide accessibility is combined with some variety of platform monitoring obligation. Conversely, since comity is founded on mutual respect, some countries might see a precedent for giving their own contempt laws worldwide effect.

Notably, in its response to the Law Commission's July 2024 consultation the Attorney General’s Office suggested that “expanding contempt jurisdiction to cover publication abroad for a foreign audience would breach principles of international comity”. (Report [4.165])

How did the Law Commission arrive at its conclusions? Its work on contempt has a long history, stretching back to a previous Report on Contempt of Court: Juror Misconduct and Internet Publications, published in December 2013 following a 2012 consultation paper. That Report included a defence for prior-published items, which could be disapplied by a notice given by the Attorney General. The defence was included in the subsequent Criminal Justice and Courts Bill, but then dropped following opposition from, among others, the Society of Editors.

After a gap of nearly ten years the Law Commission commenced a review of contempt of court in 2022. That led to a Consultation Paper in July 2024, a supplementary Consultation Paper in March 2025, and the Report (Part 1) on Liability in November 2025. A further report dealing with remaining issues will be published this year.

The Law Commission’s starting point is that the current law is unclear about its territorial ambit. But exactly how the law should be clarified has been difficult to pin down from the outset. The 2012 Consultation Paper posited, as examples, three possible approaches to publication:

(a) production within England and Wales,

(b) targeting a section of the public in England and Wales, or

(c) mere accessibility in England and Wales.

The 2013 Report summarised a mixture of consultation responses and recommended that the issue be addressed in a future project on social media.

Some years later, the Law Commission considered the territorial scope of a new harmful communications offence that it proposed in its 2021 Report on Modernising Communications Offences. It settled on a test of habitual residence. It made no specific extraterritoriality recommendation in its subsequent 2022 report on Intimate Image Abuse, instead inviting the government to consider whether the new offences proposed would benefit from specific extra-territorial statutory provision.

The July 2024 Contempt Consultation Paper provisionally recommended that territoriality should be clarified. It identified three principal options:

(a)  Place of publication irrelevant (i.e. mere accessibility).

(b)  Production or uploading in England and Wales.

(c)   As per (b), or production or uploading outside England and Wales by a person habitually resident in England and Wales, or by an organisation with a place of business within England and Wales.

So, compared with the 2012 Consultation Paper a decade earlier, in July 2024 the Law Commission in effect borrowed the habitual residence option from its previously proposed harmful communications offence and dropped the targeting option. The Law Commission’s view was now that the third option might be preferable, but it had not reached a firm provisional conclusion.

Given that in the broader online world targeting is often seen as a reasonable jurisdictional compromise, it might be asked why the Law Commission dropped it as a provisional option. The Consultation Paper did not give specific reasons, other than noting that only three responses to the 2012 consultation had been in favour of targeting.

However, targeting then reappeared in the November 2025 final Report:

4.165 Where a more limited approach was favoured then it often took as the discriminating factor whether there was an intention that the publication would reach an audience in England and Wales.

For example, the University of Sheffield CFOM argued that liability should attach also “to any publication accessible in England and Wales, wherever it was produced or uploaded, and whoever produced it, if it was primarily targeted at a section of the public in those two nations”.

The Attorney General’s Office (AGO) said “the administration of justice in England and Wales should seek to protect itself against substantial interference, regardless of the place of publication” but also suggested liability should be limited to circumstances “where the publisher intended that the publication would be accessed by members of the public in England and Wales”.

Gavin Sutter favoured the approach of section 19(11) of the Online Safety Act 2023, which attaches liability to “UK-linked” publications. This refers to publications which have the UK as a target market or their sole target market, or where “the content is or is likely to be of interest to a significant number of UK users”.  

The opening sentence could be taken as suggesting that targeting is a matter of subjective intention. Similarly, the conclusion states:

4.173… A requirement to prove an intention to reach people in England and Wales would add complexity to the law. It may become increasingly challenging as technology evolves and new ways of masking one’s location or reaching audiences become available. The words “addressed to” in the existing definition do not necessarily import a requirement to prove intention, but a definition that does not rely on those words would avoid the risk of an inadvertent limitation.

However, a targeting test does not have to involve proof of subjective intention. There is a wealth of caselaw in, among other fields, intellectual property law (especially trade marks) that treats targeting as an objectively ascertainable matter. If the Report’s rejection of targeting was predicated on an assumption that subjective intention is a necessary component of a targeting test, that would be unfortunate. But we do not know.

Of course, given its stated policy reasons regarding the importance of a fair trial, the Law Commission might have opted for mere accessibility and worldwide application whatever its understanding of targeting.

The question of reasonableness from the perspective of foreign states and private persons has been an undercurrent in the Law Commission's efforts to wrestle with the question of territorial reach since its first consultation paper in 2012. The 2012 paper started with the criminal law, noting that "the complexity in applying these principles of jurisdiction to crimes committed via the internet cannot be understated." As regards contempt under the existing 1981 Act, it observed that:

"There do not appear to be any reported cases of section 2 with a cross-frontier element. It is certainly possible to conceive of circumstances in which they might arise. For example, a US tourist might be murdered in England and Wales in such newsworthy circumstances as to be prominently featured on US news websites. These could be accessed in England and Wales and might give rise to a substantial risk of serious prejudice or impediment at trial. It is unclear whether liability for contempt might arise in such circumstances on the basis of the accessibility of the publication in England and Wales."

The 2012 consultation paper, like the Law Commission's subsequent publications, offered examples with varying degrees of connection to England and Wales. However, reasonableness from the perspective of foreign states and private persons remained an undercurrent, rather than being brought to the surface and analysed in terms of international law and comity.

Ultimately, the policy reasons that the Law Commission has finally relied upon are domestically focused. They do not go into the broader cross-border legal and geo-political aspects that a full discussion of international law and comity could have illuminated.

Such an analysis would have involved considering whether it is reasonable, from the perspective of the foreign state and its citizens, to impose ‘mere accessibility’ liability on persons in another country. It would require consideration of the position of a variety of potential actors: mainstream foreign press and media, individual bloggers and posters, and online platforms. 

Analysis of the question of reasonableness might not be the same for all actors. For instance, in 2006 the New York Times blocked UK access to an article on its website, out of concern for possible breach of UK contempt laws. Its spokesperson said:

"We're dealing with a country that, while it doesn't have a First Amendment, it does have a free press, and it's our position that we ought to respect that country's laws".

But what might be reasonable for the mainstream press might not be reasonable for an individual person. Reasonableness could also be affected by the liability threshold proposed for each kind of actor.

Reasonableness arguments would not necessarily be all in one direction. For instance it might be suggested that broad extraterritoriality could in some circumstances be reasonable for an individual foreign user: that if you choose to discuss publicly proceedings that are self-evidently active in another country, you knowingly take a risk as regards that country’s contempt laws. That could lead into human rights questions about the foreseeability of other countries’ laws: an analysis that could differ as between individuals, businesses and professionals (Perrin v UK, ECtHR 18 October 2005).

Reasonableness could also be affected by technical ability to geofence specific material, collections of material or an entire site. Such ability could well be different for different kinds of actor.
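To make the three granularities concrete, the following is a minimal, purely illustrative sketch of how geofencing decisions might be structured for specific material, a collection of material, or an entire site. All names, paths and IP-to-country mappings are hypothetical assumptions; real deployments use a GeoIP database and must contend with VPNs, proxies and lookup failures, which is precisely why the Law Commission doubted the reliability of location-based limits.

```python
# Illustrative sketch only - not any platform's actual implementation.
BLOCKED_COUNTRY = "GB"  # England and Wales approximated by a country code

# Hypothetical per-resource rules: URL path prefix -> countries blocked there
GEOFENCE_RULES = {
    "/articles/contempt-case-123": {BLOCKED_COUNTRY},  # specific material
    "/court-reporting/": {BLOCKED_COUNTRY},            # a collection of material
}

SITE_WIDE_BLOCK = set()  # countries blocked from the entire site (empty here)

def lookup_country(ip):
    """Stand-in for a GeoIP lookup; returns an ISO country code.
    The mapping below is invented for illustration."""
    return {"203.0.113.7": "AU", "198.51.100.9": "GB"}.get(ip, "US")

def is_blocked(ip, path):
    """Decide whether to withhold a resource from this visitor."""
    country = lookup_country(ip)
    if country in SITE_WIDE_BLOCK:
        return True  # entire-site geofence
    return any(path.startswith(prefix) and country in blocked
               for prefix, blocked in GEOFENCE_RULES.items())
```

The sketch also illustrates why ability to geofence differs between actors: maintaining per-resource rules presupposes editorial knowledge of which items are potentially prejudicial, something a mainstream publisher may have but an individual blogger typically will not.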

However, the Report and its precursors do not go into the kind of detailed balancing of domestic against foreign interests implied either by Professor Svantesson's third principle or by comity.

These issues are certainly not simple. A balancing exercise might or might not result in a different conclusion for some or all kinds of actor. But such an exercise would require consideration of the issues from the foreign perspective. To look at the matter simply through a domestic lens does not do full justice to the broader comity and related issues surrounding cross-border internet liability.


Monday, 23 February 2026

Safety by design or systems for content moderation?

The Online Safety Act Network (OSAN) recently published a 10-point plan to amend the Online Safety Act. The plan includes:

“Insert a definition of safety by design into the Act to make clear to Ofcom and services what Parliament intended”.

From a technical drafting perspective clarification might be welcome. The Act says that it “seeks to secure that regulated services are safe by design”. That was added at the last minute in a clause describing the overall purpose of the legislation, but which lacked any definition of ‘safe’ or ‘safe by design’. I discussed here the undoubted difficulties in interpreting what is now Section 1 of the Act.

Of course, before we can craft a definition of safety by design we have to know what it is intended to mean. For myself, I have always regarded much of the theory underlying safety by design as fragile, at least within the context of the Online Safety Act. But putting those doubts on one side, I did think that I had a reasonable idea of what safety by design was intended to be about.

Now I am not so sure.

To recap, this is how I thought safety by design was meant to apply to regulation of online platforms:

  • Safety by design requires safety to be considered at the design stage, not as an afterthought.
  • Safety by design should then be applied iteratively via periodic risk assessments, incorporating feedback learned during operation of the service.
  • Safety by design focuses on platform systems and processes, identifying and addressing those that create or exacerbate a risk of harm (however defined).
  • Safety by design is not, or at least not primarily, about systems for content moderation.
  • Safety by design favours non-content-specific, systems-focused measures.
  • Safety by design is not about automated content detection and filtering.

Safety by design proponents have long criticised the Online Safety Act for being too content-focused. Rather than more and better content moderation, platforms should have to design safety into their systems and processes from the outset. This, so the theory goes, would result in less harm (however that might be conceptualised) occurring on platforms and less need for ex-post content moderation.

There have been variations on these themes: for instance, that a systems focus can include friction measures targeted at specific kinds of content, but which stop short of requiring removal; or that measures can focus on harm arising from certain kinds of content without focusing specifically on the content itself.

Nevertheless, as a general proposition I understood safety by design (a.k.a. ‘systems and processes’) to be about addressing from the outset the design of risk-creating system features, combined with a preference for non-content-related or content-agnostic measures over content-specific measures.

If that is right, safety by design has two elements. It articulates a general approach to safety but is also exclusionary: systems for content moderation (or at least automated filtering systems) are not a safety by design measure. The UK government, it should be said, has taken the opposite view. It regards automated content filtering as a safety by design measure. That is the most obvious difference of view that, if I am right in my understanding of safety by design, would have to be resolved in crafting a statutory definition.

To the extent that safety by design proponents embrace systems for content moderation, that has tended to be as a fall-back for where safety by design measures have not squeezed harm out of the system.

Thus Professor Lorna Woods’ October 2024 paper for OSAN, Safety by Design, although allowing for ex post measures to address residual issues, differentiated that from a primary focus on design choices:

“At the moment, content moderation seems to be in tension with the design features that are influencing the creation of the content in the first place, making moderation a harder job. So a ‘by design’ approach is a necessary precondition for ensuring that other ex post responses have a chance of success.

While a “by design” approach is important, it is not sufficient on its own; there will be a need to keep reviewing design choices and updating them, as well as perhaps considering ex post measures to deal with residual issues that cannot be designed out, even if the incidence of such issues has been reduced.”

She distinguished safety by design from techno-solutionism:

“Designing for safety (or some other societal value) does not equate to techno-solutionism (or techno-optimism); the reliance on a “magic box” to solve society’s woes or provide a quick fix. Rather, what it acknowledges is that each technology may have weaknesses and disadvantages, as well as benefits. Further, the design may embody the social values and interests of its creators. A product (or some of its features) may be part of the problem. The objective of “safety by design” is – like product safety – to reduce the tendency of a given feature or service to create or exacerbate such issues.” (emphasis added)

One might think that automated content filtering is the paradigm example of regulatory techno-solutionism. Indeed, as to the Online Safety Act itself, Professor Woods noted its emphasis on systems for content moderation:

“What is rather more explicit in the [Online Safety Act] safety duties is the focus on filtering and moderation, which may have a design element (i.e. the tools are made available within the system and designed to work with the system) but seem more ex post in the way they work.”

Elsewhere Professor Woods has included reactive content take-down systems within safety by design, but as the “last port of call”. (Introducing the Systems Approach and the Statutory Duty of Care (chapter in Perspectives on Platform Regulation, Nomos, 2021).)

We can find other examples of safety by design proponents expressing concern about the Online Safety Act’s focus on systems for content moderation.

Carnegie UK was OSAN’s online safety policy predecessor and, through the work of Professor Woods and William Perrin, was the originator of the proposal for a statutory duty of care. Carnegie UK said in its June 2019 submission to the Online Harms White Paper consultation:

“Worryingly, there are references to proactive action in relation to a number of forms of content (and not just the very severe child sexual abuse and exploitation and terrorist content) which in the light of the emphasis in the codes could be taken to mean a requirement for upload filtering and general monitoring to support that.”

Demos’ submission to the draft Online Safety Bill Committee in September 2021 identified as a primary risk:

“A focus on regulation and moderation of content rather than platform systems which affect the risk of harm arising from that content” (emphasis in original)

and said:

“Although the Bill sets out a systems-based approach, there is a focus on reducing harm through content takedown measures, measuring the incidence of harms online and a focus on enforcing terms and conditions. ... we are concerned that in implementation this will turn into a ‘content-based approach’ by proxy, by prioritising the regulation of content moderation systems above other systems and design changes.”

Demos’ April 2022 position paper on the Online Safety Bill argued that:

“The Bill treats a ‘systems’ approach as meaning a ‘systems for dealing with content’ approach…”

The Demos position paper also expressed particular concern about the “strong risk of infringing on either privacy or freedom of expression” in Ofcom’s ability to require use of proactive content moderation technology.

The 5Rights Foundation’s response to Ofcom’s final Illegal Harms Code of Practice in December 2024 said:

“The legislation has a clear objective that services are made “safe by design” but the majority of Ofcom’s proposed measures are not designed to prevent harm occurring in the first place – instead focusing on content moderation and reporting tools. While greater requirements on governance and accountability are welcome, this in itself will not ensure safety by design.”

If content-focused measures, or at least automated filtering, are not a variety of safety by design then a definition of safety by design for insertion in the Act could be expected to exclude measures of that kind; albeit how it could do so when the Act specifically contemplates the imposition of automated content detection and filtering is a conundrum. 

But since the government officially regards automated content filtering as a safety by design measure, it would seem highly unlikely that a definition contradicting that could find its way into the Act.

Ofcom’s Online Safety Act implementation

With the closing of Ofcom’s Summer 2025 consultation on additional safety measures, we can assess how far the Code of Practice measures recommended or proposed by Ofcom to date are – and are not – focused on systems for content moderation.

The consultation in fact provides a dual opportunity: to analyse Ofcom’s existing and proposed measures from a safety by design perspective, and to look at how safety by design proponents have reacted to Ofcom’s newest proposals for automated content filtering (in Ofcom terminology, ‘proactive technologies’).

Non-content, reactive content-related and proactive content-related

How far have non-content safety by design principles found expression in Ofcom’s implementation of the Online Safety Act?

Regardless of whether there is overlap between systems measures and content moderation measures, we can still conceive of functionality-oriented measures that do not require the platform to make judgements about content, nor involve directly limiting dissemination of content at all. A friction measure such as a warning ‘Did you mean to post without reading the linked article?’ would be an example of such a non-content measure.
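The defining feature of such a measure is that it responds only to the user's interaction, never to the content itself. A minimal sketch of that logic, under entirely assumed names (no platform's actual API is implied), might look like this:

```python
# Hypothetical sketch of a non-content friction measure: warn a user who
# shares a link they have not opened. A real service would record link
# opens server-side; the structures here are illustrative assumptions.

WARNING = "Did you mean to post without reading the linked article?"

opened_links = set()  # (user_id, url) pairs recorded when a link is opened

def record_link_opened(user_id, url):
    """Called when the user actually opens the linked article."""
    opened_links.add((user_id, url))

def friction_prompt(user_id, url):
    """Return a warning for unread links, else None.
    Note that the linked content is never inspected or judged -
    only the user's own interaction with the link matters."""
    if (user_id, url) not in opened_links:
        return WARNING
    return None
```

The platform makes no judgement about what the article says, which is what distinguishes this kind of friction measure from content moderation.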

Thus we can break down the measures so far recommended or proposed by Ofcom into non-content and content-related. The latter can be further divided into reactive and proactive.

In total, across the Illegal Content, Protection of Children and draft Additional Measures Codes for U2U services, there are (on my reckoning) 73 non-content measures, 27 reactive content-related measures and 12 proactive content-related measures. For illegal content, most of the proactive measures are contained in the Additional Measures consultation and are based on content detection and filtering technology of various kinds.



However, a closer look at the 73 non-content measures reveals that 50 of them are administrative, procedural or informational: appointing an accountable individual, preparing various written documents, training, complaints and appeals procedures, publishing user support materials and so on. Whilst those are aspects of wider systems design, non-content measures addressed to features and functionality are of more immediate interest.

That leaves 23 non-content measures: 11 in the Illegal Content codes, all of which relate to children (in two cases only partially), and 12 in the Protection of Children codes.



Most of the 23 non-content measures concern technical functionality of the platform. The measures are limited (as required by the Act) to UK users and relate to:

  • Implementing an age-assurance process (ICU B1, PCU B1)
  • Use of highly effective age assurance (HEAA) (PCU B2 to B7) (Age assurance does of course indirectly affect the content available to users who are not verified as over-18, as the result of content-related measures predicated on age assurance.)
  • Safety defaults for child users concerning connection lists, account recommendations and direct messaging (ICU F1)
  • Removal of five kinds of functionality from child-user livestreams (ICU F3)
  • Options for user account blocking, disabling comments (for child users, or in some circumstances all registered users) (ICU J1, ICU J2)
  • Enabling children to give negative feedback on content recommender systems (PCU E3)
  • Providing information to children, when they restrict content or interactions with other accounts, as to the effect of doing so and further options available (PCU F2)
  • Options for user blocking and muting, disabling comments (users not determined to be adults by use of HEAA) (PCU J1, PCU J2)
  • Positive consent to group chat invitations (users not determined to be adults by use of HEAA) (PCU J3)

These examples illustrate that non-content measures are feasible, albeit some of those measures are, at least in part, precursors to content-related measures. Most obviously, age assurance underpins not only some of the non-content measures listed above, but also measures about content that should be hidden from under-18s.

Generally, it is striking how many of Ofcom’s non-content functionality measures are concerned with denying functionality to under-18s, or to interactions with them.

As to content-based measures, the Additional Measures consultation marks a decided shift towards automated content detection. Should these be welcomed as a version of safety by design, deprecated as systems for content moderation, or regarded as a means of addressing residual issues that cannot be designed out?

Safety by design or ex-post?

OSAN’s cross-cutting response to Ofcom’s Additional Measures consultation takes issue with Ofcom’s description of some content-related measures, including proactive technology, as being ‘safety by design’:

“While some of the proposed measures - including automated content moderation (para 1.51) and livestreaming (p27) - are framed by Ofcom as being “safer by design”, these are primarily about ex-post mitigations for harmful content (reporting content, or relying on user action after harm has occurred) or introducing a form of safety tech (proactive tech measures) rather than embedding safe design at the level of systems and processes. There is still no understanding of what good service redesign should look like to ensure a more holistic orientation towards safety.” (emphasis added)

However, OSAN’s companion detailed response to the Additional Measures Consultation characterises Ofcom’s proactive technology proposals as safety by design:

“We broadly support the move towards requiring proactive technology as a safety-by-design approach to user safety”.

The detailed response (but not the cross-cutting response) would therefore seem to endorse the government’s view of safety by design.

OSAN also suggested that Ofcom’s principles-based proactive technology proposals could be extended to include intimate image abuse.

Recommender systems

The Demos Digital submission endorsed Ofcom’s proposed content-specific approach to recommender systems:

“The Demos Digital team agrees with Ofcom’s proposal to exclude illegal content from recommender systems until the content has been reviewed by content moderation teams.”

After pointing out that “Automated content identification tools are known to struggle with reliability and bias”, Demos Digital then suggested improvements including:

“Because of these risks of inconsistency, Ofcom should provide specific guidance for platforms’ responsible use of automated content identification tools, including: transparency reporting; quality control standards for automated identification systems, including bias, reliability and accuracy; impact assessments for evaluating the automated systems; and model parameters for identifying illegal content. We believe this would alleviate some of the risks of automated content identification systems – such as inconsistencies, inaccuracies, and bias – which could result in the over-exclusion of legal content, or under-exclusion of illegal content.”

At the level of principle it is difficult to see how this reflects a systems-based approach, other than in the sense of systems for moderating content.

Parenthetically, even if a tendency to bias could be alleviated, there remains the insoluble problem that automated content identification tools do not have access to off-platform contextual information that can affect the legality of the user content in question.

In its comments on recommender systems OSAN supports limitations on the reach of “content that is harmful in nature”, if accompanied by freedom of expression safeguards such as explanations of how the systems work in practice, and notification of creators when their content is affected so as to allow them to use complaints and appeals processes.

Live-streaming

For live-streaming, OSAN has suggested some concrete ways in which Ofcom’s proposed Additional Measures could go further: building in a delay to livestreaming and turning off livestreaming by default for under-18s or under-16s. It describes these as safety by design measures:

“15. Ofcom’s proposals focus on responding to harm after it occurs and content moderation rather than preventing it in the first place. There is no requirement for live-feed delays, which are standard practice in traditional broadcasting, to prevent harmful or illegal content from being aired in real time. Safety-by-design means including proactive measures such as time-delay buffers and real-time risk assessment. There is plenty of guidance available to broadcasters on this topic.” (emphasis added)

However, it then describes them as ex-post measures:

“17. More broadly, we would recommend that Ofcom consider a greater array of ex-post features - e.g. borrowing from broadcasting good practice and building more delay into a live stream as a feature.” (emphasis added)

Is a time delay an example of safety by design or an ex-post feature? The distinction would not necessarily matter much, were it not for the fact that a statutory definition of safety by design is proposed. But either way, although a time delay is of itself a non-content measure, its purpose is to enable the platform to make judgements about the content being live-streamed and (if thought necessary) to shut down the stream. OSAN describes that as real-time risk assessment. In the context of the Act, those would have to be judgements about illegality or (for child-accessible streams) content harmful to children.

For children, OSAN contemplates a non-content-related measure: turning live-streaming off by default for children, whether under-16 or under-18. It also observes that “A strong understanding of safety-by-design would mean that where livestreaming cannot be delivered safely it shouldn’t be in place.”

Finally, OSAN cites Ofcom’s proposed limitation on livestream screen capture and recording for under-18s (part of ICU F3) as an example of friction.

Safety by design in context

As implementation of the Online Safety Act has progressed, it is perhaps not surprising if it has become less clear how safety by design should translate into concrete measures. The theory of online safety by design, founded on the notion of risk-creating features, was formulated in the context of a range of services and harms that differed greatly from those in scope of the Online Safety Act. The range of services within the Act is far broader and the kinds of harm are much more specific.

In July 2018 Woods and Perrin, working with Carnegie UK, proposed a:

“Virtuous circle of harm reduction on social media. Repeat this cycle in perpetuity or until behaviours have fundamentally changed and harm is designed out.” (Harm Reduction in Social Media, 17 July 2018)

As to kinds of services, the proposal was aimed at around 10 social media companies, each with over 1 million users. By January 2019, after discussion with various stakeholders, the authors had decided to extend the proposal to cover ‘social media and other internet platforms’ regardless of size. Now the Act covers an estimated 25,000 UK services (100,000 or more worldwide), 80% of which are micro-businesses (fewer than 10 employees).

On the face of it the underlying premise of the harm reduction cycle seems to be that what a user does on a platform is primarily the result of its design. However, the authors of the proposal say that their argument is not that we are 'pathetic dots' in the face of engineered determinism, but that the architecture of the platform nudges us towards certain behaviour (Woods and Perrin, Online harm reduction - a statutory duty of care and a regulator, April 2019).

Even if it can be said that algorithmically driven social media platforms nudge us towards certain behaviour, how would that apply outside that specific milieu, for instance to plain vanilla discussion forums? And if, even on those large social media platforms, design only nudges rather than determines user behaviour, how far can harm really be designed out of the system?

As to kinds of harms, the safety by design theory is premised on platforms being risk creators. We always then have to ask, risk of what? In the context of the Online Safety Act that means connecting a given feature to a created or exacerbated risk of one of the specific kinds of criminality in scope of the Act, or of specific kinds of content harmful to children.

Within the context of the Act, the theory has never been easy to render into concrete expression:

  • If the idea is that a user’s decision to post, say, an illegal offer to ferry illegal immigrants across the Channel is down to the design of the platform, that seems implausible.
  • If the idea is that platform design can prevent such content being encountered, but without descending into content moderation and filtering, how is that to be done? Similarly if the concern is to prevent specific kinds of content being repeated or stimulated.
  • If it means that recommender algorithms could be designed in ways that lessen the likelihood of their disseminating illegal content, it would have to be explained how that can be achieved without trespassing into content filtering.
  • If the idea is that platform functionality can be designed to make it harder or slower to post, share or comment on user content generally, or to impose volume limits (a ‘circuit-breaker’), that would fit the theory. However, that kind of friction measure would necessarily strike against desirable and undesirable content alike, raising human rights proportionality issues.
  • If the idea is that some functionalities should be banned, that would fit a version of the theory that holds that some functionalities cannot be designed safely. But the more general purpose the functionality in question, the greater the impact on legitimate content and the greater the human rights challenge.
  • If the idea is that harm to children can be prevented by platform design which, for instance, reduces opportunities for adults to contact children, that would fit the theory.

If no connection can be found between a given technical or business model feature of a platform and a risk of a user deciding to behave illegally in a particular way, then the regulator will look somewhere other than those design features to counter illegality: to other design features or, failing that, to systems for moderation. 

Professor Woods has suggested that designers should ask themselves: ‘What happens when the bad people get hold of this feature?’ (Introducing the Systems Approach and the Statutory Duty of Care, ibid.) However, that question could be asked of any general purpose functionality, risk-creating or not. On the face of it the question is about possible uses, not whether the feature in question creates or exacerbates a risk of a particular illegal or harmful use. It could be asked of the very act of providing a forum to which users can post. If we are not careful, we rapidly fall into the trap of characterising speech as a risk, not a fundamental right.

It is telling that Ofcom adopted that same approach in its statutory Risk Register: rather than attempt to identify functionalities that inherently create or exacerbate risk of illegality or content harmful to children, it sought to identify features that are used by malefactors as well as by law-abiding users: correlation rather than causation. That led it to list as risk factors general purpose functionality such as the ability to create hyperlinks.

If safety by design turns out to be a poor fit with much of the Online Safety Act, it should be acknowledged that the originators of the safety by design theory never wanted illegality to be the touchstone in the first place. Professor Woods said:

“These categories of harm should be identified by reference to their impact on the victim, not by reference to whether the speech might be considered illegal or not.” (Introducing the Systems Approach and the Statutory Duty of Care, ibid.)

That risks a leap from the frying pan (attributing risk of illegal behaviour to a platform feature) into the fire (pursuing nebulous and subjective kinds of harm). That aside, it would be no surprise if the theory turns out not to map easily on to the Act. It is one thing to say that, for instance, chasing ‘Likes’ trains users to produce ‘response-creating content’ (Introducing the Systems Approach and the Statutory Duty of Care, ibid). It is something else to show that a feature creates a risk of a user committing a specific criminal offence.

It may not be fanciful to think that something has got lost along the way from the 10 or so large social media platforms that the Carnegie UK authors had in mind for their original 2018 proposals, to the broad variety of 100,000 UK and overseas services in scope of the Online Safety Act. If, in essence, the theory was always really about large social media companies, their curation and engagement algorithms and their data-driven business models, it would not be a shock to find that it turns out to have little or no application beyond that.

For platforms where user agency is the predominant factor, and design decisions cannot realistically be regarded as likely to increase or decrease the likelihood of illegality or relevant content harm, logic would suggest that issues that cannot be designed out would most likely be at the forefront, not residual. A fruitless quest for specific illegality- or harm-inducing features could then easily result in a theoretical focus on systems and processes lapsing into systems for content moderation, thence to proactive content filtering technologies.

As to a statutory definition of safety by design, if systems for content moderation, including automated content filtering, are now to some extent embraced as an aspect of safety by design, it is difficult to see how a corresponding statutory definition could place meaningful limits on the kinds of concrete measures contemplated. It would also seem to have moved a very long way from the original conception of safety by design. 

If the reality is that we do not have a clear idea of how safety by design is meant to translate into concrete regulatory measures within the context of the Act, that would not be a good starting point for crafting a statutory definition.

The alternative, of course, is that I have always had safety by design wrong and that Parliament knew exactly what it intended in Section 1. If so, mea culpa.


Wednesday, 11 February 2026

Extraterritoriality and the transatlantic free speech wars

The transatlantic free speech wars continue to rage. The US House Judiciary Committee was in action again last week, taking aim at the European Commission (which rejected its latest interim report as ‘pure nonsense’) and provoking EU civil society groups in the process.

The US administration, for its part, fired off its most recent salvo shortly before Christmas last year, when US Secretary of State Marco Rubio added five people to a list of individuals who would not be allowed visas, due to their activities in the 'global censorship-industrial complex'.

The US rogues' gallery included former EU Commissioner Thierry Breton, whose letter to Elon Musk in August 2024, referencing the Digital Services Act, scuppered Breton's prospects of a job in the 2024-2029 European Commission. The Commission have been trying to live down the letter ever since. US critics of the DSA have never let them forget it.

Notably, or perhaps prudently, the no-visa list included no current foreign state officeholders or functionaries. It did not go as far as when the US imposed visa restrictions on the Brazilian Supreme Court judge Alexandre de Moraes in July 2025. The implied threat, however, remains: "The State Department stands ready and willing to expand today's list if other foreign actors do not reverse course."

The next, heavily trailed, US counterstrike may be legislative: a federal ‘GRANITE Act’ Bill. We wait to see whether such a Bill materialises and, if so, what it contains. A state-level GRANITE Act was introduced into the Wyoming legislature yesterday.

If a federal Bill were framed along the lines of the 2010 SPEECH Act (aimed at libel forum-shopping) it would act as a shield, explicitly preventing enforcement of foreign regulatory and similar orders within the USA. A more radical (and controversial) step would be if it contained a sword: a cause of action on which aggrieved plaintiffs could claim damages in the US courts. The most controversial step would be if that were accompanied by amendment of the US Foreign Sovereign Immunities Act to enable foreign regulators such as Ofcom to be sued, either in the federal courts or under state legislation such as the Wyoming Bill.

Sovereignty exercised or violated?

What exactly is the US administration aggrieved about? Secretary of State Rubio's social media announcement referred to:

"egregious acts of extraterritorial censorship" by “ideologues in Europe [who] have led organized efforts to coerce American platforms to punish American viewpoints they oppose.”

The official State Department statement added:

“These radical activists and weaponized NGOs have advanced censorship crackdowns by foreign states—in each case targeting American speakers and American companies.”

It went on:

“President Trump has been clear that his America First foreign policy rejects violations of American sovereignty. Extraterritorial overreach by foreign censors targeting American speech is no exception.”

Stripped of the rhetoric, this is at least in part an accusation that the EU and UK, implicitly breaching international law on territorial sovereignty, have overreached in asserting their local regulatory regimes across the Atlantic.

The European Commission's response to the US visa bans asserted the EU's own:

"sovereign right to regulate economic activity in line with our democratic values and international commitments .... If needed, we will respond swiftly and decisively to defend our regulatory autonomy against unjustified measures."

The reported UK response was more anodyne:

"While every country has the right to set its own visa rules, we support the laws and institutions which are working to keep the internet free from the most harmful content."

Neither response directly addressed the US’s complaint about extraterritoriality. Since Secretary of State Rubio’s announcement did not identify any specific foreign state act that had prompted the visa sanctions, that was perhaps unsurprising. Moreover the visa restrictions were aimed, Thierry Breton apart, at private persons who had not held public office. As against them, the US complaint was of ‘advancing’ censorship crackdowns by foreign states.

The French Foreign Minister, for his part, claimed that the DSA:

“…has absolutely no extraterritorial reach and in no way affects the United States.” (Jean-Noël Barrot, tweet, 23 Dec 2025)

Taking sides

What should a dispassionate legal observer make of all this? Some, no doubt, will be tempted just to plump for one side or the other, motivated by partisan preference for the EU, UK or US approach to governing speech and online platforms, by broader political affinities, or by views on the propriety or otherwise of deploying visa sanctions for this kind of purpose.

Tempting as that may be, simply to declare 'four legs good, two legs bad' will not do when it comes to considering international law rules and extraterritoriality. Taking sides based purely on a preference for the Digital Services Act or the Online Safety Act over the US First Amendment, or vice versa, does not address the underlying legal issue: how, in the inherently cross-border online world, to go about drawing boundaries - or at least minimising friction - between different national or regional legal systems. A more analytical approach is called for.

Prescriptive versus enforcement jurisdiction

For that we have first to distinguish between prescriptive and enforcement jurisdiction. Prescriptive jurisdiction is the territorial ambit of legislation: how far, and on what basis, does it claim to apply to persons or conduct outside its borders? Enforcement jurisdiction, on the other hand, is about concrete exercise of powers by a state authority. In the case of the Online Safety Act that authority is the designated regulator, Ofcom.

Where extraterritoriality is concerned, international law gives more leeway to prescriptive than to enforcement jurisdiction. That is because the state’s conduct in legislating is merely assertive. Although laws are an expression of state power, writing something into a state’s own legislation does not of itself involve conduct on the territory of another state.

Typically it is enforcement that causes problems, both in its own right - steps that the authorities have taken, especially cross-border, to enforce against a foreign person - and in the light that enforcement shines on the prescriptive territorial reach of the substantive legislation.

The US complaint – prescriptive or enforcement?

Did the US complaint concern prescriptive or enforcement jurisdiction? An interview given by Under-Secretary of State Sarah B. Rogers to the Liz Truss Show before Christmas put a little more flesh on the bones as far as the Online Safety Act is concerned. She suggested that European, UK and other governments abroad were trying to nullify the American First Amendment and that: 

"when British regulators decree that British law applies to American speech on American sites on American soil with no connection to Britain, then we're kind of forced to have this conversation." 

She went on:

"The position that Ofcom has taken in the 4Chan litigation is essentially that I, an American, could go set up a website in my garage, it could be Sarah's hobby forum, it could be all about America, it could be all about the 4th of July or whatever. It could have no employees in Britain, no buildings in Britain, my speech wouldn't even need to reach into Britain. I'm not posting about the Queen or anything, I'm posting about American concepts, American political controversies. Ofcom's legal position nonetheless is that if I run afoul of British content laws, then I have to pay money to the British government. When that happens, I think you should expect a response from the American government, and I expect to see one shortly." 

On the face of it Rogers’ concern is about the substantive territorial ambit of the OSA: in other words, prescriptive jurisdiction. 

Prescriptive jurisdiction

International law recognises various grounds on which extraterritorial prescriptive jurisdiction can be regarded as justified, some of them potentially very broad. These tend to reflect a broader principle that there must be sufficient connection between the person or conduct and the state asserting jurisdiction to justify the extraterritoriality in question. The more tenuous the connection and the greater the cross-border reach, the more exorbitant the claim to jurisdiction and the less likely that the extraterritoriality can be justified. 

That is the theory. In practice, the customary norms of international law tend to be distinctly malleable and, when push comes to shove, to merge into geopolitics.

Enforcement jurisdiction

In contrast, for exercise of investigative or enforcement jurisdiction, the traditional view is that nothing less than consent of the target state will do. Unlike for prescriptive jurisdiction, there is no balancing exercise to justify the degree of extraterritoriality of the asserted jurisdiction. The focus is entirely on the conduct of the state and whether it is an incursion on the territorial sovereignty of the target state.

However, this principle has come under strain. When electronic communication and the internet enable state authorities to act remotely without setting foot in another state’s territory or sending a physical document across the border, does that violate another state’s territorial sovereignty? Should, as for prescriptive jurisdiction, other factors come into play that could justify the state's conduct?

For one answer we can go back to 1648 and the Peace of Westphalia. This was the birth of the modern nation state, in which each state has exclusive sovereignty over its own territory. The corollary of that principle is an aversion to projection of state power into another state's territory: most obviously, sending troops across the border.

That, however, is not the only way of violating a state's sovereignty. Enforcement actions such as serving a court order or an arrest warrant within a foreign state's territory also project state power across the border and are considered to require the consent of the nation state concerned:

"Persons may not be arrested, a summons may not be served, police or tax investigations may not be mounted, and orders for production of documents may not be executed on the territory of another state, except under the terms of a treaty or other consent given." (Brownlie's Principles of Public International Law (9th edn) J. Crawford, Oxford, 2019. p.462)

That is why there is a proliferation of international treaties dealing with issues such as cross-border service of legal proceedings, assistance from overseas authorities in obtaining evidence for criminal prosecutions (MLAT) or, more recently, enabling direct service of information requests on foreign telecommunications operators. 

Enforcement jurisdiction and regulators

A requirement for consent of the target state creates a potential problem for regulators, whose procedures are highly bureaucratic: inevitably so since considerations of due process and fundamental rights will require them to give enforcement targets full and fair notice of their proposed and actual decisions. They are also often given powers to serve mandatory demands for information, backed up by sanctions (sometimes criminal offences, sometimes civil penalties).

On what basis can a regulator send such official documents across borders without impinging on the sovereignty of the target state? The answer is not immediately obvious, especially since the activities of regulators do not necessarily fall into simple categories of civil or criminal upon which international treaties regarding service of legal documents tend to be founded.

A typical solution to the territorial sovereignty problem is to enlist the assistance of the relevant authorities in the target state. If such assistance is not covered by a multinational or bilateral treaty, a regulator might come to an arrangement such as a memorandum of understanding between agencies in a group of states.

The 2020 multilateral Competition Authorities Mutual Assistance Framework model agreement, for instance, envisages that requests for voluntary provision of information could be made by a direct approach to persons in another territory. For mandatory process the route is via the authorities in the other country.

However, courts have sometimes held that serving a cross-border notice is not like trespassing on the territory of the receiving state. In the UK the Court of Appeal in Jimenez considered an HMRC taxpayer information notice (with the potential sanction of civil, but not criminal, financial penalties) served by post on someone in Dubai, in order to check his UK tax position. Jimenez argued that sending the notice was contrary to international law, as it would:

“offend state sovereignty by violating the principle that a state must not enforce its laws on the territory of another state without that other state’s consent.”

Leggatt LJ (as he then was) said:

“I do not accept that sending a notice by post to a person in a foreign state requiring him to produce information that is reasonably required for the purpose of checking his tax position in the UK violates the principle of state sovereignty. Such a measure does not involve the performance of any official act within the territory of another state – as would, for example, sending an officer of Revenue and Customs to enter the person’s business premises in a foreign state and inspect business documents that are on the premises…”.

The Jimenez decision postdated the current edition of Brownlie quoted above. In KBR the UK Supreme Court emphasised that Jimenez concerned civil, not criminal, penalties. 

All this is not to say that a domestic statute can never expressly grant powers to take steps that, as a matter of enforcement jurisdiction, could go further than envisaged by international law. A UK statute may indeed do that, but in the expectation that the powers will be used with restraint, in a way that does not offend the sensibilities of another nation state (a.k.a. comity).

This passage from the Court of Appeal judgment in Competition and Markets Authority v Volkswagen and BMW, a case upholding CMA information notices served on German companies, is illuminating:

“All competition authorities worldwide face the same conundrum. Their statutory duty is to preserve the integrity of their domestic markets and protect consumers; yet, to perform that task regulators frequently have to focus their fire power upon actors located abroad where, if they seek enforcement, they might confront a variety of legal and practical problems. … How do legislatures square the circle? They achieve this by conferring broad extraterritorial regulatory and investigatory powers which can be exercised in undiluted form within their territorial jurisdictions, but which are exercised with circumspection and pragmatism when dealing with undertakings physically located elsewhere.”

The judgment went on to discuss comity:

“The creation of a power to be exercised with comity in mind … is, in our judgment, an eminently apt device to enable regulators to address, flexibly, issues of comity if and when they arise. [Counsel] for the CMA explained how comity worked. She acknowledged candidly that, notwithstanding the existence of broad investigatory powers, it was ‘out of the question’ that the CMA would for instance ever seek to conduct an on the spot investigation (a dawn raid) at the premises of an undertaking physically located outside the jurisdiction. Equally, she did not shirk from acknowledging that there could be difficulties in the exercise of mandatory powers of enforcement or sanction against a foreign undertaking which failed to comply with a statutory request for information. Such practical difficulties were simply the stuff of a regulator’s life.” 

Thus from a comity perspective a national regulator seeking to take enforcement steps against a foreign person may decide to tread carefully, especially where the subject matter may touch on particular sensitivities of the target state, even if the domestic legislation gives it power to act across borders. A combination of ambitiously extraterritorial prescriptive jurisdiction and broad investigatory and enforcement powers has the potential to become a combustible mixture.

Online Safety Act – prescriptive jurisdiction

With that background out of the way, how do Under-Secretary Rogers’ comments stack up?

In terms of its overall ambit, the UK Online Safety Act is extraterritorial but does not go as far as the mere accessibility position taken by Australian online safety legislation. The Australian Online Safety Act 2021 baldly asserts that a social media service is in scope of the Act unless “none of the material on the service is accessible to, or delivered to, one or more end-users in Australia”.

That legislation gave rise to civil litigation for an injunction brought by the eSafety Commissioner in the Australian courts, arguing that X should be required to take down certain videos worldwide and that geofencing to exclude Australia was insufficient. The regulator lost. 

The UK OSA sets out three grounds on which a service can be regarded as ‘UK-linked’ and so be a regulated service within scope of the Act. In the US litigation brought by 4Chan, Ofcom relies on two of those grounds: first, a significant number of UK users (based on statistics gleaned from 4Chan's website), and second the UK as a target market of the site (based on 4Chan seeking advertisers by reference to the percentage of UK users stated on its website).

Ofcom's position is thus that a sufficient UK connection (as stipulated in the OSA) exists in order for the OSA to apply to 4Chan. Ofcom did not (and could not) assert that the OSA safety duties apply to a site regardless of whether it has any UK connection.

How, then, should we interpret Under-Secretary Rogers' comment: “…when British regulators decree that British law applies to American speech on American sites on American soil with no connection to Britain”? That must presumably reflect a view of what should constitute a UK connection that is at odds with the OSA's criteria, or perhaps with Ofcom's interpretation of those criteria. It cannot, however, be argued that the OSA contains no UK connection criteria at all.

As to compliance with international law, the UK government would no doubt argue that as a matter of prescriptive jurisdiction the criteria for UK links stipulated in the OSA provide a sufficiently close connection with the UK to justify bringing a foreign service provider into scope, and are not exorbitant. It would also no doubt point to the fact that the substantive measures that can be required of an in-scope provider apply only to UK users of the service.

The Online Safety Act UK links criteria

The UK links set out in the OSA vary in the closeness of the stipulated UK connection. Some of them could be regarded as overreaching.  

The first ground - a 'significant' number of UK users - suffers from the vagueness of the term 'significant'. The Act does not elaborate on what that might mean and Ofcom has avoided specifics in its published guidance.

Speaking for myself, I have long proposed that extraterritorial jurisdiction on the internet should depend on whether a foreign site has engaged in positive conduct towards the jurisdiction. On that basis a self-contained test based only on number or proportion of users is potentially problematic: users may come to a site in numbers without the site operator ever having engaged in any positive conduct towards the country in which they are located. (This criticism can be applied to the DSA as well as the OSA).

The second ground - the UK as a target market - is reasonably conventional, if interpreted so as to reflect a requirement for positive conduct. Directing and targeting of activities has long been thought to be an appropriate ground on which to assert jurisdiction over internet actors.

The third ground - a 'material risk of significant [physical or psychological] harm' - is the most far-reaching and comes closest to a ‘mere accessibility’ test.

So far as prescriptive jurisdiction is concerned, then, the OSA does stretch the limits of extraterritoriality, but - unless one were to take the position that a site is connected only to the country of its location - does not purport to apply to sites regardless of whether they have any connection to the UK.

Online Safety Act – enforcement jurisdiction

Although Under-Secretary Rogers’ comments are framed in terms of the OSA’s prescriptive jurisdiction, the point that 4Chan has emphasised in its US litigation concerns Ofcom’s exercise of its enforcement jurisdiction – serving a series of documents, including a mandatory information request under Section 100 of the OSA, directly on 4Chan by email.

As already noted, a regulator such as Ofcom proceeds against a service provider by way of a series of official notices. These present no jurisdictional problems if they can be served within the UK, but can Ofcom serve a notice on a foreign operator without violating the territorial sovereignty of its host country? As a matter of UK domestic law the OSA provides a variety of methods of service, including cross-border service by post and service by email.

Some might suggest that, as a matter of international law, this is an impermissible exercise of enforcement or investigatory jurisdiction unless done with the consent of the USA. But as Jimenez illustrates, a UK court would not necessarily agree; and in any case, if the words of the domestic statute are sufficiently clear to rebut any interpretative presumption against extraterritoriality, a UK court will give effect to them.

Consequences

The consequences of exceeding acceptable limits of extraterritoriality may vary widely, depending on how exorbitant the exercise of jurisdiction is and how sensitive the subject matter. They range from no response (often the case for merely prescriptive jurisdiction), to diplomatic protest, to refusal by foreign courts to recognise or enforce, to enactment of various kinds of blocking legislation.

One example of the latter was the US SPEECH Act 2010, which prevents enforcement of certain foreign libel judgments in the US courts and enables US persons to start proceedings for a declaration of non-enforceability in the US courts.

Another was the UK Protection of Trading Interests Act 1980. This was a response to a long period of US anti-trust legislation being enforced against conduct outside the USA by non-US companies. Years of diplomatic activity had failed to resolve the conflict, which had become more acute in the late 1970s.

As already mentioned, a state-level GRANITE Act has been introduced as a Bill into the Wyoming legislature. It is both a shield and a sword, albeit that the sword would be dependent on a federal amendment to the Foreign Sovereign Immunities Act. It remains to be seen whether a federal GRANITE Act will materialise.

The Court of Appeal in the BMW case noted that it had been said that antitrust law was the best illustration of the problem of national public interest coming into conflict with international sovereignty. Speech on the internet – today, online safety in particular – is bidding fair to seize that mantle.