Wednesday, 11 February 2026

Extraterritoriality and the transatlantic free speech wars

The transatlantic free speech wars continue to rage. The US House Judiciary Committee was in action again last week, taking aim at the European Commission (which dismissed the Committee’s latest interim report as ‘pure nonsense’) and provoking EU civil society groups in the process.

The US administration, for its part, fired off its most recent salvo shortly before Christmas last year, when US Secretary of State Marco Rubio added five people to a list of individuals who would not be allowed visas, due to their activities in the 'global censorship-industrial complex'.

The US rogues' gallery included former EU Commissioner Thierry Breton, whose letter to Elon Musk in August 2024, referencing the Digital Services Act, scuppered Breton's prospects of a job in the 2024-2029 European Commission. The Commission have been trying to live down the letter ever since. US critics of the DSA have never let them forget it.

Notably, or perhaps prudently, the no-visa list included no current foreign state officeholders or functionaries. It did not go as far as when the US imposed visa restrictions on the Brazilian Supreme Court judge Alexandre de Moraes in July 2025. The implied threat, however, remains: "The State Department stands ready and willing to expand today's list if other foreign actors do not reverse course."

The next, heavily trailed, US counterstrike may be legislative: a federal ‘GRANITE Act’ Bill. It remains to be seen whether such a Bill materialises and, if so, what it contains. A state-level GRANITE Act was introduced into the Wyoming legislature yesterday.

If a federal Bill were framed along the lines of the 2010 SPEECH Act (aimed at libel forum-shopping) it would act as a shield, explicitly preventing enforcement of foreign regulatory and similar orders within the USA. A more radical (and controversial) step would be if it contained a sword: a cause of action on which aggrieved plaintiffs could claim damages in the US courts. The most controversial step would be if that were accompanied by amendment of the US Foreign Sovereign Immunities Act to enable foreign regulators such as Ofcom to be sued, either in the federal courts or under state legislation such as the Wyoming Bill.

Sovereignty exercised or violated?

What exactly is the US administration aggrieved about? Secretary of State Rubio's social media announcement referred to:

"egregious acts of extraterritorial censorship" by “ideologues in Europe [who] have led organized efforts to coerce American platforms to punish American viewpoints they oppose.”

The official State Department statement added:

“These radical activists and weaponized NGOs have advanced censorship crackdowns by foreign states—in each case targeting American speakers and American companies.”

It went on:

“President Trump has been clear that his America First foreign policy rejects violations of American sovereignty. Extraterritorial overreach by foreign censors targeting American speech is no exception.”

Stripped of the rhetoric, this is at least in part an accusation that the EU and UK, implicitly breaching international law on territorial sovereignty, have overreached in asserting their local regulatory regimes across the Atlantic.

The European Commission's response to the US visa bans asserted the EU's own:

"sovereign right to regulate economic activity in line with our democratic values and international commitments .... If needed, we will respond swiftly and decisively to defend our regulatory autonomy against unjustified measures."

The reported UK response was more anodyne:

"While every country has the right to set its own visa rules, we support the laws and institutions which are working to keep the internet free from the most harmful content."

Neither response directly addressed the US’s complaint about extraterritoriality. Since Secretary of State Rubio’s announcement did not identify any specific foreign state act that had prompted the visa sanctions, that was perhaps unsurprising. Moreover the visa restrictions were aimed, Thierry Breton apart, at private persons who had not held public office. As against them, the US complaint was of ‘advancing’ censorship crackdowns by foreign states.

The French Foreign Minister, for his part, claimed that the DSA:

“…has absolutely no extraterritorial reach and in no way affects the United States.” (Jean-Noël Barrot, tweet, 23 Dec 2025)

Taking sides

What should a dispassionate legal observer make of all this? Some, no doubt, will be tempted just to plump for one side or the other, motivated by partisan preference for the EU, UK or US approach to governing speech and online platforms, by broader political affinities, or by views on the propriety or otherwise of deploying visa sanctions for this kind of purpose.

Tempting as that may be, simply to declare 'four legs good, two legs bad' will not do when it comes to considering international law rules and extraterritoriality. Taking sides based purely on a preference for the Digital Services Act or the Online Safety Act over the US First Amendment, or vice versa, does not address the underlying legal issue: how, in the inherently cross-border online world, to go about drawing boundaries - or at least minimising friction - between different national or regional legal systems. A more analytical approach is called for.

Prescriptive versus enforcement jurisdiction

For that we have first to distinguish between prescriptive and enforcement jurisdiction. Prescriptive jurisdiction is the territorial ambit of legislation: how far, and on what basis, does it claim to apply to persons or conduct outside its borders? Enforcement jurisdiction, on the other hand, is about concrete exercise of powers by a state authority. In the case of the Online Safety Act that authority is the designated regulator, Ofcom.

Where extraterritoriality is concerned, international law gives more leeway to prescriptive than to enforcement jurisdiction. That is because the state’s conduct in legislating is merely assertive. Although laws are an expression of state power, writing something into a state’s own legislation does not of itself involve conduct on the territory of another state.

Typically it is enforcement that causes problems, both in its own right - steps that the authorities have taken, especially cross-border, to enforce against a foreign person - and in the light that enforcement shines on the prescriptive territorial reach of the substantive legislation.

The US complaint – prescriptive or enforcement?

Did the US complaint concern prescriptive or enforcement jurisdiction? An interview given by Under-Secretary of State Sarah B. Rogers to the Liz Truss Show before Christmas put a little more flesh on the bones as far as the Online Safety Act is concerned. She suggested that European, UK and other governments abroad were trying to nullify the American First Amendment and that: 

"when British regulators decree that British law applies to American speech on American sites on American soil with no connection to Britain, then we're kind of forced to have this conversation." 

She went on:

"The position that Ofcom has taken in the 4Chan litigation is essentially that I, an American, could go set up a website in my garage, it could be Sarah's hobby forum, it could be all about America, it could be all about the 4th of July or whatever. It could have no employees in Britain, no buildings in Britain, my speech wouldn't even need to reach into Britain. I'm not posting about the Queen or anything, I'm posting about American concepts, American political controversies. Ofcom's legal position nonetheless is that if I run afoul of British content laws, then I have to pay money to the British government. When that happens, I think you should expect a response from the American government, and I expect to see one shortly." 

On the face of it Rogers’ concern is about the substantive territorial ambit of the OSA: in other words, prescriptive jurisdiction. 

Prescriptive jurisdiction

International law recognises various grounds on which extraterritorial prescriptive jurisdiction can be regarded as justified, some of them potentially very broad. These tend to reflect a broader principle that there must be sufficient connection between the person or conduct and the state asserting jurisdiction to justify the extraterritoriality in question. The more tenuous the connection and the greater the cross-border reach, the more exorbitant the claim to jurisdiction and the less likely that the extraterritoriality can be justified. 

That is the theory. In practice, the customary norms of international law tend to be distinctly malleable and, when push comes to shove, to merge into geopolitics.

Enforcement jurisdiction

In contrast, for exercise of investigative or enforcement jurisdiction, the traditional view is that nothing less than consent of the target state will do. Unlike for prescriptive jurisdiction, there is no balancing exercise to justify the degree of extraterritoriality of the asserted jurisdiction. The focus is entirely on the conduct of the state and whether it is an incursion on the territorial sovereignty of the target state.

However, this principle has come under strain. When electronic communication and the internet enable state authorities to act remotely without setting foot in another state’s territory or sending a physical document across the border, does that violate another state’s territorial sovereignty? Should, as for prescriptive jurisdiction, other factors come into play that could justify the state's conduct?

For one answer we can go back to 1648 and the Peace of Westphalia. This was the birth of the modern nation state, in which each state has exclusive sovereignty over its own territory. The corollary of that principle is an aversion to projection of state power into another state's territory: most obviously, sending troops across the border.

That, however, is not the only way of violating a state's sovereignty. Enforcement actions such as serving a court order or an arrest warrant within a foreign state's territory also project state power across the border and are considered to require the consent of the nation state concerned:

"Persons may not be arrested, a summons may not be served, police or tax investigations may not be mounted, and orders for production of documents may not be executed on the territory of another state, except under the terms of a treaty or other consent given." (Brownlie's Principles of Public International Law (9th edn) J. Crawford, Oxford, 2019. p.462)

That is why there is a proliferation of international treaties dealing with issues such as cross-border service of legal proceedings, assistance from overseas authorities in obtaining evidence for criminal prosecutions (mutual legal assistance treaties, or MLATs) or, more recently, enabling direct service of information requests on foreign telecommunications operators.

Enforcement jurisdiction and regulators

A requirement for consent of the target state creates a potential problem for regulators, whose procedures are highly bureaucratic: inevitably so since considerations of due process and fundamental rights will require them to give enforcement targets full and fair notice of their proposed and actual decisions. They are also often given powers to serve mandatory demands for information, backed up by sanctions (sometimes criminal offences, sometimes civil penalties).

On what basis can a regulator send such official documents across borders without impinging on the sovereignty of the target state? The answer is not immediately obvious, especially since the activities of regulators do not necessarily fall into simple categories of civil or criminal upon which international treaties regarding service of legal documents tend to be founded.

A typical solution to the territorial sovereignty problem is to enlist the assistance of the relevant authorities in the target state. If such assistance is not covered by a multilateral or bilateral treaty, a regulator might come to an arrangement such as a memorandum of understanding between agencies in a group of states.

The 2020 multilateral Competition Authorities Mutual Assistance Framework model agreement, for instance, envisages that requests for voluntary provision of information could be made by a direct approach to persons in another territory. For mandatory process the route is via the authorities in the other country.

However, courts have sometimes held that serving a cross-border notice is not like trespassing on the territory of the receiving state. In the UK the Court of Appeal in Jimenez considered an HMRC taxpayer information notice (with the potential sanction of civil, but not criminal, financial penalties) served by post on someone in Dubai, in order to check his UK tax position. Jimenez argued that sending the notice was contrary to international law, as it would:

“offend state sovereignty by violating the principle that a state must not enforce its laws on the territory of another state without that other state’s consent.”

Leggatt LJ (as he then was) said:

“I do not accept that sending a notice by post to a person in a foreign state requiring him to produce information that is reasonably required for the purpose of checking his tax position in the UK violates the principle of state sovereignty. Such a measure does not involve the performance of any official act within the territory of another state – as would, for example, sending an officer of Revenue and Customs to enter the person’s business premises in a foreign state and inspect business documents that are on the premises…”.

The Jimenez decision postdated the current edition of Brownlie quoted above. In KBR the UK Supreme Court emphasised that Jimenez concerned civil, not criminal, penalties. 

All this is not to say that a domestic statute can never expressly grant powers to take steps that, as a matter of enforcement jurisdiction, could go further than envisaged by international law. A UK statute may indeed do that, but in the expectation that the powers will be used with restraint, in a way that does not offend the sensibilities of another nation state (a.k.a. comity).

This passage from the Court of Appeal judgment in Competition and Markets Authority v Volkswagen and BMW, a case upholding CMA information notices served on German companies, is illuminating:

“All competition authorities worldwide face the same conundrum. Their statutory duty is to preserve the integrity of their domestic markets and protect consumers; yet, to perform that task regulators frequently have to focus their fire power upon actors located abroad where, if they seek enforcement, they might confront a variety of legal and practical problems. … How do legislatures square the circle? They achieve this by conferring broad extraterritorial regulatory and investigatory powers which can be exercised in undiluted form within their territorial jurisdictions, but which are exercised with circumspection and pragmatism when dealing with undertakings physically located elsewhere.”

The judgment went on to discuss comity:

“The creation of a power to be exercised with comity in mind … is, in our judgment, an eminently apt device to enable regulators to address, flexibly, issues of comity if and when they arise. [Counsel] for the CMA explained how comity worked. She acknowledged candidly that, notwithstanding the existence of broad investigatory powers, it was ‘out of the question’ that the CMA would for instance ever seek to conduct an on the spot investigation (a dawn raid) at the premises of an undertaking physically located outside the jurisdiction. Equally, she did not shirk from acknowledging that there could be difficulties in the exercise of mandatory powers of enforcement or sanction against a foreign undertaking which failed to comply with a statutory request for information. Such practical difficulties were simply the stuff of a regulator’s life.” 

Thus from a comity perspective a national regulator seeking to take enforcement steps against a foreign person may decide to tread carefully, especially where the subject matter may touch on particular sensitivities of the target state, even if the domestic legislation gives it power to act across borders. A combination of ambitiously extraterritorial prescriptive jurisdiction and broad investigatory and enforcement powers has the potential to become a combustible mixture.

Online Safety Act – prescriptive jurisdiction

With that background out of the way, how do Under-Secretary Rogers’ comments stack up?

In terms of its overall ambit, the UK Online Safety Act is extraterritorial but does not go as far as the mere accessibility position taken by Australian online safety legislation. The Australian Online Safety Act 2021 baldly asserts that a social media service is in scope of the Act unless “none of the material on the service is accessible to, or delivered to, one or more end-users in Australia”.

That legislation gave rise to civil litigation for an injunction brought by the eSafety Commissioner in the Australian courts, arguing that X should be required to take down certain videos worldwide and that geofencing to exclude Australia was insufficient. The regulator lost. 

The UK OSA sets out three grounds on which a service can be regarded as ‘UK-linked’ and so be a regulated service within scope of the Act. In the US litigation brought by 4Chan, Ofcom relies on two of those grounds: first, a significant number of UK users (based on statistics gleaned from 4Chan's website), and second the UK as a target market of the site (based on 4Chan seeking advertisers by reference to the percentage of UK users stated on its website).

Ofcom's position is thus that a sufficient UK connection (as stipulated in the OSA) exists in order for the OSA to apply to 4Chan. Ofcom did not (and could not) assert that the OSA safety duties apply to a site regardless of whether it has any UK connection.

How, then, should we interpret Under-Secretary Rogers' comment: “…when British regulators decree that British law applies to American speech on American sites on American soil with no connection to Britain”? That must presumably reflect a view of what should constitute a UK connection that is at odds with the OSA's criteria, or perhaps with Ofcom's interpretation of those criteria. It cannot, however, be argued that the OSA contains no UK connection criteria at all.

As to compliance with international law, the UK government would no doubt argue that as a matter of prescriptive jurisdiction the criteria for UK links stipulated in the OSA provide a sufficiently close connection with the UK to justify bringing a foreign service provider into scope, and are not exorbitant. It would also no doubt point to the fact that the substantive measures that can be required of an in-scope provider apply only to UK users of the service.

The Online Safety Act UK links criteria

The UK links set out in the OSA vary in the closeness of the stipulated UK connection. Some of them could be regarded as overreaching.  

The first ground - a 'significant' number of UK users - suffers from the vagueness of the term 'significant'. The Act does not elaborate on what that might mean and Ofcom has avoided specifics in its published guidance.

Speaking for myself, I have long proposed that extraterritorial jurisdiction on the internet should depend on whether a foreign site has engaged in positive conduct towards the jurisdiction. On that basis a self-contained test based only on number or proportion of users is potentially problematic: users may come to a site in numbers without the site operator ever having engaged in any positive conduct towards the country in which they are located. (This criticism can be applied to the DSA as well as the OSA).

The second ground - the UK as a target market - is reasonably conventional, if interpreted so as to reflect a requirement for positive conduct. Directing and targeting of activities has long been thought to be an appropriate ground on which to assert jurisdiction over internet actors.

The third ground - a 'material risk of significant [physical or psychological] harm' to individuals in the UK - is the most far-reaching and comes closest to a ‘mere accessibility’ test.

So far as prescriptive jurisdiction is concerned then, the OSA does stretch the limits of extraterritoriality, but – unless one were to take the position that a site is connected only to the country of its location - does not purport to apply to sites regardless of whether they have any connection to the UK.
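Laid out schematically, the structure of the UK links test as described above looks something like the sketch below. It is illustrative only: the function and parameter names are mine, the accessibility condition attached to the third ground is a paraphrase, and the statutory wording, not this sketch, governs.

```python
# Schematic sketch of the OSA 'UK links' grounds as described in this post.
# Illustrative only: the statutory wording governs, not this paraphrase.

def is_uk_linked(significant_uk_users: bool,
                 uk_is_target_market: bool,
                 accessible_in_uk: bool,
                 material_risk_of_significant_harm_to_uk_users: bool) -> bool:
    ground_1 = significant_uk_users   # 'significant' is not defined in the Act
    ground_2 = uk_is_target_market    # closest to a conventional 'targeting' test
    # Third ground: the broadest, approaching a 'mere accessibility' test
    ground_3 = accessible_in_uk and material_risk_of_significant_harm_to_uk_users
    return ground_1 or ground_2 or ground_3  # any one ground suffices
```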

Online Safety Act – enforcement jurisdiction

Although Under-Secretary Rogers’ comments are framed in terms of the OSA’s prescriptive jurisdiction, the point that 4Chan has emphasised in its US litigation concerns Ofcom’s exercise of its enforcement jurisdiction – serving a series of documents, including a mandatory information request under Section 100 of the OSA, directly on 4Chan by email.

As already noted, a regulator such as Ofcom proceeds against a service provider by way of a series of official notices. These present no jurisdictional problems if they can be served within the UK, but can Ofcom serve a notice on a foreign operator without violating the territorial sovereignty of its host country? As a matter of UK domestic law the OSA provides a variety of methods of service, including cross-border service by post and service by email.

Some might suggest that that, as a matter of international law, is an impermissible exercise of enforcement or investigatory jurisdiction unless done with the consent of the USA. But as Jimenez illustrates, a UK court would not necessarily agree; and in any case, if the words of the domestic statute are sufficiently clear to rebut any interpretative presumption against extraterritoriality, a UK court will give effect to them. 

Consequences

The consequences of exceeding acceptable limits of extraterritoriality may vary widely, depending on how exorbitant is the exercise of jurisdiction and how sensitive is the subject matter. They range from no response (often the case for merely prescriptive jurisdiction), to diplomatic, to refusal by foreign courts to recognise or enforce, to enactment of various kinds of blocking legislation.

One example of the latter was the US SPEECH Act 2010, which prevents enforcement of certain foreign libel judgments in the US courts and enables US persons to start proceedings for a declaration of non-enforceability in the US courts.

Another was the UK Protection of Trading Interests Act 1980. This was a response to a long period of US anti-trust legislation being enforced against conduct outside the USA by non-US companies. Years of diplomatic activity had failed to resolve the conflict, which had become more acute in the late 1970s.

As already mentioned, a state-level GRANITE Act has been introduced as a Bill into the Wyoming legislature. It is both a shield and a sword, albeit that the sword would be dependent on a federal amendment to the Foreign Sovereign Immunities Act. It remains to be seen whether a federal GRANITE Act will materialise.

The Court of Appeal in the BMW case noted that it had been said that antitrust law was the best illustration of the problem of national public interest coming into conflict with issues of international sovereignty. Speech on the internet – today, online safety in particular – is bidding fair to seize that mantle.


Saturday, 13 December 2025

Repeal, reform, rewrite?

A Parliamentary petition calling for repeal of the Online Safety Act has reached over 550,000 signatures and is due to be debated on 15 December.

The demand for abolition is eye-catching, but inevitably lacks nuance. In any case it is tolerably clear that the petition is aimed not at the entire Act, but at its core: the set of regulatory safety duties imposed on platforms and search engines. Even for those elements, the petition envisages not bare repeal but repeal and replacement:

“We believe that the scope of the Online Safety act is far broader and restrictive than is necessary in a free society.

For instance, the definitions in Part 2 covers online hobby forums, which we think do not have the resource to comply with the act and so are shutting down instead.

We think that Parliament should repeal the act and work towards producing proportionate legislation rather than risking clamping down on civil society talking about trains, football, video games or even hamsters because it can't deal with individual bad faith actors.”

Of course the Act contains much more than the safety duties: in particular, new communications and other criminal offences, age assurance requirements on non-user-to-user pornography websites in Part 5, and separate duties around fraudulent paid-for advertising.

Communications offences

Most, if not all, the new criminal offences either improve on what went before or are eminently justifiable. Is anyone suggesting, for instance, that deliberate epilepsy trolling or cyberflashing should not be an offence?

Possibly the only offence about which there could be some debate is the S.179 false communications offence. It has, rightly or wrongly, become something of a lightning rod for critics concerned about over-broadly criminalising disinformation.

Whatever doubts may exist about its current formulation, the S.179 offence is still an improvement on what went before. Its antecedents can be traced back to 1935, when distressing hoax telegrams were a concern:

“there are a number of cases where people … will send a telegram to a person saying that somebody is seriously ill and the recipient is put to great anxiety, sometimes even to expense, and then finds it is a bogus message. … That silly practice which causes anxiety - in some cases a telegram has been sent declaring a person to be dead when there was no foundation for the statement - ought to be stopped.” (Postmaster-General, Hansard, 5 March 1935)

The 1935 offence covered telegrams and telephone calls. In 1969 it was broadened to include messages sent by means of a public telecommunications service. In 2003 it was amended to cover communications sent across a public electronic communications network. By that time internet communications were undoubtedly in scope. Following a Law Commission recommendation, S.179 OSA replaced the 2003 offence.

The 1935 (and 1969 and 2003 updated) offence was framed in terms of causing “annoyance, inconvenience, or needless anxiety”. S.179’s “non-trivial physical or psychological harm”, whatever the criticism that it has attracted as a definition, is narrower.

As to the application of S.179 to fake news, the Law Commission said:

“It is important to note at the outset that the proposals were not intended to combat what some consultees described in their responses as “fake news”. While some instances of deliberate sharing of false information may fall within the scope of the offences, our view is that the criminal law is not the appropriate mechanism to regulate false content online more broadly.” (Modernising Communications Offences, Final Report, 20 July 2021, para 3.3)

When the new offences came into force in January 2024 the government announced that the false information offence criminalised “sending fake news that aims to cause non-trivial physical or psychological harm.”

Notwithstanding the improvement on its predecessors, S.179 has some features in common with the harmful communications offence (also recommended by the Law Commission) which was rightly dropped from the Bill. As a result of that decision the long-criticised S.127(1) Communications Act 2003 and S.1(1)(a)(i) Malicious Communications Act 1988 “grossly offensive” offences remain in force. A re-examination of how to replace those is overdue. In conjunction, another look could be taken at the false information offence.

The core safety duties

Returning to the core safety duties, do they merit repeal? The underlying point of principle is that the Act’s safety duties rest on defectively designed foundations: a flawed analogy with the duty of care owed to visitors by the occupier of real world premises. That, it might be said, requires exposing the root cause, not compilation of a snagging list.

Why is the analogy flawed? The Act’s safety duties are about illegal (criminal) user content and activities, and user content harmful to children. Occupiers’ liability is about risk of causing physical injury. Those are categorically different. You don’t need 444 pages of Ofcom Illegal Content Judgements Guidance to decide that a projecting nail in a floorboard poses a risk of physical injury. Moreover the remedy – hammering down the nail – hurts no-one.

Content judgements about illegality, in particular, tend to be nuanced and complex, and will often depend on contextual information that is unavailable to the platform. Hammering down nails at scale inevitably results in arbitrary judgements and suppression of legitimate user content. That is magnified when the Act contemplates automated content detection and removal in order to fulfil a platform’s duties.

In short, speech is not a tripping hazard and it was folly to design legislation as if it were.

Nor, it might be suggested, should anyone be thinking of adding more layers to this already teetering wedding cake. It is a confection with which no-one – supportive or critical – is currently very happy, albeit for widely differing reasons. If vaguer and more subjective notions of harmful content are piled onto the legislation, the structural cracks can only expand.

So, where may critics’ attentions be focused? The Open Rights Group, both itself and jointly with other civil society organisations, has produced detailed briefing papers ahead of the Parliamentary debate. 

Here, using U2U services by way of illustration, are some thoughts of my own on possible areas of interest.

S.10(2)(a) Proactive content filtering

Section 10(2)(a) imposes a duty on platforms to take or use proportionate measures relating to the design or operation of the service to “prevent individuals from encountering priority illegal content by means of the service”. Priority illegal content is a list of over 140 criminal offences, plus their inchoate versions: encouraging and assisting, conspiracy and so on.

The EU Digital Services Act adopts a diametrically opposed stance, retaining the long-standing EU prohibition on imposing general monitoring obligations on platforms.

A platform that complies with recommendations made in an Ofcom Code of Practice is deemed to comply with the duty laid down in the Act. Ofcom started, in its original Illegal Content Code of Practice, by recommending CSAM perceptual hash-matching and URL-matching for some platforms.

It is now going further in its Additional Safety Measures consultation, proposing to apply perceptual hash-matching to terrorism and intimate image abuse content, and proposing (among other things) ‘principles-based’ proactive technology measures for a broader range of illegal content. Unlike the previous CSAM hash-matching recommendation, IIA perceptual hash-matching could be carried out against an unverified database of hashes.

However, once extended beyond matching against a list of verified illegal items, the assumption underpinning content detection and upload filtering – that technology can make accurate content judgements – becomes questionable. Automated illegality judgements made in the absence of full contextual information are inherently arbitrary. Measures that result in too many false positives cannot be proportionate.
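To see why that matters at platform scale, here is a purely illustrative back-of-the-envelope calculation. The figures used - volume, prevalence and accuracy rates - are assumptions of my own, not drawn from Ofcom's consultation or any real service, but they show how even an apparently accurate tool can generate false positives that swamp the genuine detections when the target content is rare.

```python
# Illustrative only: volume, prevalence and accuracy figures are assumptions,
# not taken from Ofcom's consultation or any real service.
daily_posts = 10_000_000      # posts scanned per day on a hypothetical large service
prevalence = 0.001            # assume 0.1% of posts are actually target illegal content
recall = 0.95                 # assumed detection rate for genuinely illegal posts
false_positive_rate = 0.01    # assume 1% of legitimate posts are wrongly flagged

illegal_posts = daily_posts * prevalence
legal_posts = daily_posts - illegal_posts

correct_detections = illegal_posts * recall          # ~9,500 per day
false_positives = legal_posts * false_positive_rate  # ~99,900 per day

precision = correct_detections / (correct_detections + false_positives)
print(f"Correct detections: {correct_detections:,.0f}")
print(f"False positives:    {false_positives:,.0f}")
print(f"Precision:          {precision:.1%}")  # roughly 8.7%: most flags are wrong
```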

Ofcom’s proposed ‘principles-based’ measures are its most controversial recommendations. Ofcom has declined to specify what is an acceptable level of false positives, leaving that judgement to service providers. As Ofcom’s Director for Online Safety Strategy Delivery Mark Bunting said in evidence to the Lords Communications and Digital Committee:

“We think it is right that firms have to take responsibility for making the judgment themselves about whether the tool is sufficiently accurate and effective for their use. That is their judgment, and they should use it; we are expecting that to drive wider use than the regulator just issuing directions.” (14 October 2025)

Ofcom itself acknowledges that its principles-based proposals:

“could lead to significant variation in impact on users’ freedom of expression between services”. (Consultation, para 9.136)

That sits uneasily with the reasonable degree of certainty that users need if the measures are to comply with the ECHR’s ‘prescribed by law’ requirement.

Section 10(2)(a) is likely to be high up the critics’ list: too general, flawed in its underlying assumptions, and (as is now evident) a means for Ofcom to propose principles-based proactive technology measures that are so vague as to be non-compliant with basic rule of law and ECHR requirements.

S.192 Illegality judgements

A related issue, but relevant also to reactive content moderation, is the standard that the Act sets for illegality judgements. S.192 specifies how service providers should go about determining whether a given item of content is illegal (or, for that matter, whether it is content of another relevant kind such as content harmful to children). The provision was introduced into the Bill by amendment, following the Independent Reviewer of Terrorism Legislation’s ‘Missing Pieces’ paper, which rightly pointed out the Bill’s lack of clarity about how service providers were to go about adjudging illegality, especially the mental element of any offence.

Section 192, although it clarifies the illegality judgement thresholds, bakes in removal of legal content: most obviously in the stipulated threshold of “reasonable grounds to infer” illegality. It also requires the platform to ignore the possibility of a defence unless it has positive grounds to infer that a defence may succeed.

The S.192 threshold also sits uneasily with S.10(3)(b), which requires a proportionate process for swift removal when a platform is alerted by a person to the presence of any illegal content, or “becomes aware” of it in any other way.

S.121 – Accredited technology notices

This is perhaps the most overtly controversial Ofcom power in the Act. For CSAM, the S.121 power can – unlike the preventive content duty under S.10(2) – require accredited proactive detection technology to be applied to private communications. Critics have questioned whether the power could be used to undermine or circumvent end-to-end encryption, or to render it impossible for a service provider to offer it.

As with proactive technology measures, in its consultation on minimum standards of accuracy of accredited technology for S.121 purposes Ofcom has declined to specify quantitative limits on what might be an acceptable level of accuracy of the technology.

Ofcom’s relationship with government

The extent to which government is in a position to put pressure on Ofcom, in principle an independent regulator, has come into clearer focus since Royal Assent. The most notable example is the Secretary of State’s ‘deep disappointment’ 12 November 2025 letter to Ofcom.

The Act also contains specific mechanisms giving the Executive a degree of control, or at least influence, over Ofcom.

S.44 Powers of direction These powers enable the Secretary of State to direct Ofcom to modify a draft code of practice on various grounds, including national security, public safety, public health, or relations with a foreign government. The original provisions were criticised during the passage of the Bill and were narrowed as a result. So far they have not been used.

S.92 and S.172 Strategic priorities S.172 enables the Secretary of State to designate a statement setting out the government’s online safety strategic priorities. If the Secretary of State does so, Ofcom must have regard to the statement when carrying out its online safety functions. The government has already used this power.

Whatever Parliament may have thought was meant by ‘strategic’ priorities, or ‘particular outcomes identified with a view to achieving the strategic priorities’, the 30-page Statement designated by the Secretary of State on 2 July 2025 goes into what might be thought to be operational areas: for instance, interpreting ‘safe by design’ as contemplating the use of technology in content moderation.

S.175 Special circumstances directions The Secretary of State has power to give a direction to Ofcom, in exercising its media literacy functions, if the Secretary of State has reasonable grounds for believing that there is a threat to the health or safety of the public, or to national security. A direction could require Ofcom to give priority to specified objectives, or require Ofcom to give notice to service providers requiring them to make a public statement about steps they are taking in response to the threat. The grounding of this power in Ofcom’s media literacy functions renders its ambit somewhat opaque.

S.98 Ofcom’s risk register and sectoral risk profiles

S.98 mandates Ofcom to produce risk registers and risk profiles for different kinds of services, grouped as Ofcom thinks fit according to their characteristics and risk levels. ‘Characteristics’ includes functionalities, user base, business model, governance and other systems and processes.  ‘Risk’ means risk of physical or psychological harm presented by illegal content or activity, or by content harmful to children.

In preparing its work product Ofcom abandoned any notion that functionality has to create or exacerbate a risk of illegal content or offences. Instead, Ofcom’s risk register is based on correlation: evidence that malefactors have made use of functionality available on platforms and search engines.

That has led Ofcom to designate common or garden functionality, such as the ability to use hyperlinks, as risk factors. That, it might be thought, turns the right of freedom of expression on its head. We do not treat the ability to use pen and paper, a typewriter, or a printing press as an inherent risk.

The length, complexity and often impenetrability of Ofcom’s work product is also noteworthy. The Illegal Harms Register of Risks runs to 480 pages, accompanied by 84 pages of Risk Assessment Guidance and Profiles.

Harm, illegality, or both?

The Act’s safety duties vary as to whether they are trying to protect users from risk of encountering illegal content per se, risk of suffering harm (physical or psychological) as a result of encountering illegal content, or both. This may seem a rather technical point, but is symptomatic of the Act’s deeper confusion about what it is trying to achieve.

Ofcom’s efforts to simplify the duties by referring to ‘illegal harms’ in some of its documents added to the confusion. Nor were matters helped by the Act’s designation, as priority offences, of some offences for which the likelihood of physical or psychological harm would appear to be remote (consider money-laundering, for instance).

There is also a curious mismatch between what the Act requires for Ofcom’s Risk Registers and Risk Profiles, compared with risk assessments carried out by service providers. Ofcom’s work products are required only to consider risk of harm (physical or psychological) presented by illegal content, whereas service provider risk assessments are also required to consider illegality per se.

S.1 The purpose clause

Section 1 of the Act is unlikely to be in the sights of many critics. But, innocuous as it may appear, it deserves to be.

The purpose clause ostensibly sets out the overall purposes of the Act. It was the last minute product of a new-found spirit of cross-party collaboration that infused the House of Lords in the final days of the Bill. In reality, however, it illustrates the underlying lack of clarity about what the Act is trying to achieve.

Purpose clauses are of debatable benefit at the best of times: if they add nothing to the text of the Act, they are superfluous. If they differ from the text of the Act, they are prone to increase the difficulty of interpretation. This section uses terminology that appears nowhere else in the Act, and is caveated with ‘among other things’ and ‘in broad terms’.

Possibly the low point of Section 1 is the reference to the need for services to be ‘safe by design’. Neither ‘safe’ nor ‘safe by design’ is defined in the Act. They are susceptible of any number of interpretations.

One school of thought regards safety by design as being about giving thought at the design stage to safety features that are preferably content-agnostic and not focused on content moderation. The government, in its Statement of Strategic Priorities, takes a different view: safety by design is about preventing harm from occurring in the first place. That includes deploying technology to improve the scale and effectiveness of content moderation.

That interpretation readily translates into proactive content detection and filtering technology (see the discussion of section 10(2)(a) above). Indeed Ofcom, in its response to the government’s Statement of Strategic Priorities section on safety by design, refers to its own consultation on proactive technologies.

Fundamental rethink?

There are more problems with the Act: vague core definitions that even Ofcom will not give a view on, over-reaching territoriality, the inclusion of small, low risk volunteer-led forums, questions around age assurance and age-gating, concerns that the definitions of content legal but harmful to children are imprecise and may deny children access to beneficial content, and others.

The most radical option would be to rethink the broadcast-style ‘regulation by regulator’ model altogether. This observer has always viewed the adoption of that model as a fundamental error. Nothing that has occurred since has changed that view. If anything, it has been reinforced. Delay, expense and an inevitably bureaucratic approach were hard-wired into the legislation. The opportunity cost of the years and resources spent heading down that rabbit hole has to be immense.

The results are now attracting criticism from all sides: supporters, opponents and government alike. The mystery is why everyone concerned could not see what was designed in to the legislation from the start. If you put your faith in a discretionary regulator rather than in clear legal rules, prepare for disappointment when the regulator does not do what you fondly imagined that it would. If you wanted the regulator to be bold and ambitious, be prepared for the project to end up in the courts when the regulator overreaches.

Finally, the Online Safety Act project has been bedevilled throughout by a tendency to equate all platforms with large, algorithmically driven, social media companies. Even now, a Lords amendment recently tabled to the Children’s Wellbeing and Schools Bill, claiming to be about “introducing regulations to prevent under 16s from accessing social media”, is drafted so as to apply to all regulated user-to-user services as defined in the Online Safety Act – a vastly wider cohort of services.

That takes us back to the flawed analogy with occupiers’ liability. Possibly the analogy was conceived with large social media companies in mind. But then, if the projecting nail in the floorboard is actually a social media company’s engagement algorithm, not the user’s speech itself, that would suggest legislation based on a completely different foundation: one that focuses on features and functionalities that create or exacerbate a risk of specific, tightly defined, objectively ascertainable kinds of injury.

Put another way, if you want to legislate about safety, make it about safety properly so called; if you want to legislate about Big Tech and the Evil Algorithm, make it about that; if you want to legislate about children, make it about that.  

What alternative approaches might there be? I don’t pretend to have complete answers, but suggested some in my response to the Online Harms White Paper back in 2019; and again in this post, written during the 2022 hiatus while the Conservatives sorted out their leadership crisis. 


Sunday, 16 November 2025

Data protection and the Online Safety Act revisited

The Information Commissioner’s Office has recently published its submission to Ofcom’s consultation on additional safety measures under the Online Safety Act.

The consultation is the second instalment of Ofcom’s iterative approach to writing Codes of Practice for user-to-user and search service providers. The first round culminated in Codes of Practice that came into force in March 2025 (illegal content) and July 2025 (protection of children). A service provider that implements the recommendations in an Ofcom Code of Practice is deemed to comply with the various safety duties imposed by the Act.

The recommendations that Ofcom proposes in this second instalment are split almost equally between content-related and non-content measures (see Annex for a tabular analysis). Content-related measures require the service provider to make judgements about items of user content. Non-content measures are not directly related to user content as such.

Thus the non-content measures mainly concern age assessment, certain livestreaming features and functionality that Ofcom considers should not be available to under-18s, and default settings for under-18s. Two more non-content measures concern a livestream user reporting mechanism and crisis response protocols.

The content-related measures divide into reactive (content moderation, user sanctions and appeals) and proactive (automated content detection in various contexts). Ofcom cannot recommend use of proactive technology in relation to user content communicated privately.

The applicability of each measure to a given service provider depends on various size, risk, functionality and other criteria set by Ofcom. 

Proactive content-related measures are especially controversial, since they involve platforms deploying technology to scan and analyse users’ content with a view to it being blocked, removed, deprioritised or affected in some other way. 

The ability of such technology to make accurate judgements is inevitably open to question, not only because of limitations of the technology itself but also because illegality often depends on off-platform contextual information that is not available to the technology. Inaccurate judgements result in false positives and, potentially, collateral damage to legitimate user content.  

The ICO submissions

What does the ICO have to say? Given the extensive territory covered by the Ofcom consultation, quite a lot: 32 pages of detailed commentary. Many, but not all, of the comments concern the accuracy of various kinds of proactive content detection technology.

As befits its regulatory remit, the ICO approaches Ofcom’s recommendations from the perspective of data protection: anything that involves processing of personal data. Content detection, judgements and consequent action are, from the ICO’s perspective, processes that engage the data protection accuracy principle and the overall fairness of processing.

Although the ICO does not comment on ECHR compliance, similar considerations will inform the compatibility of some of Ofcom’s content-related proactive technology recommendations with Article 10 ECHR (freedom of expression).

The ICO’s main comments include:

  • Asking Ofcom to clarify its evidence on the availability of accurate, effective and bias-free technologies for harms in scope of its "principles-based" proactive technology measures. Those harms are, for illegal content: image based CSAM, CSAM URLs, grooming, fraud and financial services, encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content. This is probably the most significant of the ICO's suggestions, in effect challenging Ofcom to provide stronger evidential support for its confidence that such technologies are available for all those kinds of harm.
  • For Ofcom’s principles-based measures, the ICO recommends that a provider, when assessing whether a given technology complies with Ofcom’s proactive technology criteria, should have to consider the “impact and consequences” of incorrect detections, including any sanctions that services may apply to users as a result of such detections. Those may differ for different kinds of harm.
  • Suggesting that Ofcom’s Illegal Content Codes of Practice should specify that services should have “particular consideration regarding the use of an unverified hash database” (as would be permissible under Ofcom’s proposed measure) for Intimate Image Abuse (IIA) content.

Before delving into these specific points, some of the ICO’s more general observations on Ofcom’s consultation are noteworthy.

Data protection versus privacy

The ICO gently admonishes Ofcom for conflating Art 8 ECHR privacy protections (involving consideration of whether there is a reasonable expectation of privacy) with data protection.

For example, section 9.158 of the privacy and data protection rights assessment suggests that the degree of interference with data protection rights will depend on whether the content affected by the measures is communicated publicly or privately. This is not accurate under data protection law; irrespective of users’ expectations concerning their content (and/or associated metadata), data protection law applies where services are processing personal data in proactive technology systems. Services must ensure that they comply with their data protection obligations and uphold users’ data protection rights, regardless of whether communications are deemed to be public or private under the OSA.  

The ICO suggests that it may be helpful for Art 8 and data protection to be considered separately.

Data protection, automated and human moderation

The ICO “broadly supports” Ofcom’s proposed measures for perceptual hash-matching for IIA and terrorism content (discussed further below). However, in this context it again takes issue with Ofcom’s conflation of data protection and privacy. This time the ICO goes further, disagreeing outright with Ofcom’s characterisations:

For example, the privacy and data protection rights assessments for both the IIA and terrorism hash matching measures state that where services carry out automated processing in accordance with data protection law, that processing should have a minimal impact on users’ privacy. Ofcom also suggests that review of content by human moderators has a more significant privacy impact than the automated hash matching process. We disagree with these statements. Compliance with data protection law does not, in itself, guarantee that the privacy impact on users will be minimal. Automation carries inherent risks to the rights and freedoms of individuals, particularly when the processing is conducted at scale.

The ICO’s disagreement with Ofcom’s assessment of the privacy impact of automated processing harks back to the ICO’s comments on Ofcom’s original Illegal Harms consultation last year. Ofcom had said:

Insofar as services use automated processing in content moderation, we consider that any interference with users’ rights to privacy under Article 8 ECHR would be slight.

The ICO observed in its submission to that consultation:

From a data protection perspective, we do not agree that the potential privacy impact of automated scanning is slight. Whilst it is true that automation may be a useful privacy safeguard, the moderation of content using automated means will still have data protection implications for service users whose content is being scanned. Automation itself carries risks to the rights and freedoms of individuals, which can be exacerbated when the processing is carried out at scale.

Hash-matching, data protection, privacy and freedom of expression

In relation to hash-matching the ICO stresses (as it does also in relation to Ofcom’s proposed principles-based measures, discussed below) that accuracy of content judgements impacts not only freedom of expression, but privacy and data protection:

For example, accuracy of detections and the risk of false positives made by hash matching tools are key data privacy considerations in relation to these measures. Accuracy of detections has been considered in Ofcom’s freedom of expression rights assessment, but has not been discussed as a privacy and data protection impact. The accuracy principle under data protection law requires that personal information must be accurate, up-to-date, and rectified where necessary. Hash matching tools may impact users’ privacy where they use or generate inaccurate personal information, which can also lead to unfair consequences for users where content is incorrectly actioned or sanctions incorrectly applied.

Principles-based proactive technology measures - evidence of available technology

The Ofcom consultation proposes what it calls "principles-based" measures (ICU C11, ICU C12, PCU C9, PCU C10), requiring certain U2U platforms to assess available proactive technology and to deploy it if it meets proactive technology criteria defined by Ofcom. 

These would apply to certain kinds of “target” illegal content and content harmful to children. Those are, for illegal content: image based CSAM, CSAM URLs, grooming, fraud (and financial services), and encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content. 

Ofcom says that it has a higher degree of confidence that proactive technologies that are accurate, effective and free from bias are likely to be available for addressing those harms. Annex 13 of the consultation devotes 7 pages to Ofcom's evidence supporting that.

The ICO says that it is not opposed to Ofcom’s proposed proactive technology measures in principle. But as currently drafted the measures “present a number of questions concerning alignment with data protection legislation”, which the ICO describes as “important points”.

In the ICO’s view there is a “lack of clarity” in the consultation documents about the availability of proactive technology that meets Ofcom's proactive criteria for all harms in scope of the measures. The ICO suggests that this could affect how platforms go about assessing the availability of suitable technology:

…we are concerned that the uncertainty about the effectiveness of proactive technologies currently available could lead to confusion for organisations seeking to comply with this measure, and create the risk that some services will deploy technologies that are not effective or accurate in detecting the target harms.

It goes on to comment on the evidence set out in Annex 13:

Annex 13 outlines some evidence on the effective deployment of existing technologies, but this is not comprehensively laid out for all the harms in scope. We consider that a more robust overview of Ofcom’s evidence of the tools available and their effectiveness would help clarify the basis on which Ofcom has determined that it has a higher degree of confidence about the availability of technologies that meet its criteria. This will help to minimise the risk of services deploying proactive technologies that are incompatible with the requirements of data protection law.

The ICO approaches this only as a matter of compliance with data protection law. Its comments do, however, bear tangentially on the argument that Ofcom’s principles-based proactive technology recommendations, lacking quantitative accuracy and effectiveness criteria, are too vague to comply with Art 10 ECHR.

Ofcom has to date refrained from proposing concrete thresholds for false positives, both in this consultation and in a previous consultation on technology notices under S.121 of the Act. If Ofcom were to accede to the ICO’s suggestion that it should clarify the evidential basis of its higher degree of confidence in the likely availability of accurate and effective technology for harms in scope, might that lead it to grasp the nettle of quantifying acceptable limits of accuracy and effectiveness?

Principles-based proactive technology criteria – variable impact on users

Ofcom’s principles-based measures do set out criteria that proactive technology would have to meet. However, the proactive technology criteria are framed as qualitative factors to be taken into account, not as threshold conditions.

The ICO does not go so far as to challenge the absence of threshold conditions. It supports “the inclusion of these additional factors that services should take into account” and considers that “these play an important role in supporting the accuracy and fairness of the data processing involved.”

However, it notes that:

…the factors don’t recommend that services consider the distinction between the different types of impacts on users that may occur as a result of content being detected as target content.

It considers that:

… where personal data processing results in more severe outcomes for users, it is likely that more human review and more careful calibration of precision and recall to minimise false positives would be necessary to ensure the underpinning processing of personal data is fair.

The ICO therefore proposes that Ofcom should add a further factor, recommending that service providers:

…also consider the impact and consequences of incorrect detections made by proactive technologies, including any sanctions that services may apply to users as a result of such detections. …This will help ensure the decisions made about users, using their personal data, are more likely to be fair under data protection law.

This is all against the background that:

Where proactive technologies are not accurate or effective in detecting the harms in scope of the measures, there is a risk of content being incorrectly classified as target illegal content or target content harmful to children. Such false positive outcomes could have a significant impact on individuals’ data protection rights and lead to significant data protection harms. For example, false positives could lead to users wrongly having their content removed, their accounts banned or suspended or, in the case of detection of CSEA content, users being reported to the National Crime Agency or other organisations.

That identifies the perennial problem with proactive technology measures. However, while the ICO proposal would add contextual nuance to service providers’ multi-factorial assessment of risk of false positives, it does not answer the fundamental question of how many false positives is too many. That would remain for service providers to decide, with the likelihood of widely differing answers from one service provider to the next. Data protection law aside, the question would remain of whether Ofcom’s proposed measures comply with the "prescribed by law" requirement of the ECHR.
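
The scale of the problem is easy to illustrate with some purely hypothetical arithmetic (none of the figures below come from Ofcom, the ICO or any service provider). Where genuinely illegal content is rare, even an apparently small false positive rate can produce tens of thousands of wrongly flagged items a day, with roughly one correct detection for every three flags:

```python
# Back-of-the-envelope illustration only; every figure here is assumed.
daily_items = 50_000_000         # items scanned per day on a large service
prevalence = 0.0005              # fraction that is genuinely target content
detection_rate = 0.95            # proportion of genuine target content caught
false_positive_rate = 0.001      # 0.1% of innocuous items wrongly flagged

target_items = daily_items * prevalence                    # 25,000
innocuous_items = daily_items - target_items               # 49,975,000

true_positives = target_items * detection_rate             # 23,750 correctly flagged
false_positives = innocuous_items * false_positive_rate    # 49,975 wrongly flagged

precision = true_positives / (true_positives + false_positives)   # ~0.32
print(f"{false_positives:,.0f} items wrongly flagged per day; precision {precision:.0%}")
```

The same error rates produce very different absolute numbers on services of different sizes and with different prevalence of target content, which is one reason why answers to the 'how many is too many' question are likely to vary so widely from one provider to the next.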

Perceptual hash-matching – sourcing image-based IIA hashes

Ofcom’s recommendations include perceptual hash matching against databases of hashes, for intimate image abuse and terrorist content.
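
Perceptual hashing produces a compact fingerprint designed to change only slightly when an image is resized, recompressed or lightly edited, so matching is a question of closeness rather than exact equality. The following Python sketch is purely illustrative: the distance threshold and the database are placeholders, and Ofcom's consultation does not prescribe any particular hashing algorithm.

```python
# Illustrative only: perceptual hash matching by Hamming distance, in the
# style of hashes such as PDQ. Threshold and database contents are assumed.

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two equal-length perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def is_match(image_hash: int, hash_database: set, max_distance: int = 8) -> bool:
    """True if the image's hash is within max_distance bits of any database
    entry - a 'reason to suspect', not proof that the images are the same."""
    return any(hamming_distance(image_hash, known) <= max_distance
               for known in hash_database)
```

A near match is therefore a signal rather than a determination: how much weight it can bear depends on the quality of the hash database and on how much of the flagged content is then checked by human reviewers, which is where the ICO's comments come in.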

Ofcom proposes that, for IIA content, hash-matching could be carried out against an unverified database of hashes. That is in contrast with its recommendations for CSAM and terrorism content hash-matching. The ICO observes:

Indeed Ofcom notes that the only currently available third-party database of IIA hashes does not verify the content; instead, content is self-submitted by victims and survivors of IIA.

Ofcom acknowledges that third party databases may contain some images that are not IIA, resulting in content being erroneously identified as IIA.

Ofcom said in the consultation:

We are not aware of any evidence of unverified hash databases being used maliciously with the aim of targeting content online for moderation. While we understand the risk, we are not aware that it has materialised on services which use hash matching to tackle intimate image abuse.

Under Ofcom's proposals the service provider would be expected to treat a positive match by perceptual hash-matching technology as “reason to suspect” that the content may be intimate image abuse. It would then be expected to subject an “appropriate proportion” of detected content to human review.

According to Annex 14 of the consultation, the factors that service providers should consider when deciding what proportion of content to review would include:

The principle that content with a higher likelihood of being a false positive should be prioritised for review, with particular consideration regarding the use of an unverified hash database.

The ICO notes that having “particular consideration regarding use of an unverified hash database” does not appear in the proposed Code of Practice measures themselves. It observes:

Having regard to the use of unverified databases is an important privacy and data protection safeguard. It is our view that due to the increased risk of false positive detections where services use unverified hash databases, services may need to review a higher proportion of the content detect [sic] by IIA hash matching tools in order to meet the fairness and accuracy principles of data protection law.

The ICO recommends that the factor should be added to the Code of Practice. 
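
In practical terms, a service provider implementing that factor might do something along the lines of the following sketch. It is purely illustrative: the review rates are invented for the example, and nothing in the consultation or the ICO's response prescribes them.

```python
import random

# Illustrative only: review a higher proportion of hash matches where the
# source database is unverified and false positives are therefore more likely.
REVIEW_RATE = {
    "verified": 0.2,    # sample 20% of matches from a verified hash database
    "unverified": 1.0,  # review every match from an unverified hash database
}

def needs_human_review(match_source: str) -> bool:
    """Decide whether a positive hash match goes to a human moderator."""
    return random.random() < REVIEW_RATE[match_source]
```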

Other ICO recommendations

Other ICO recommendations highlighted in its Executive Summary include:

  • Suggesting that additional safeguards should be outlined in the Illegal Content Judgements Guidance where, as Ofcom proposes, illegal content judgements might be made about CSAM content that is not technically feasible to review (for instance on the basis of group names, icons or bios). The ICO also suggests that Ofcom should clarify which users involved in messaging, group chats or forums would be classed as having shared CSAM when a judgement is made on the basis of a group-level indicator.
  • As regards sanctions against users banned for CSEA content, noting that methods to prevent such users returning to the service may engage the storage and access technology provisions of the Privacy and Electronic Communications Regulations (PECR); and suggesting that for the purposes of appeals Ofcom should clarify whether content determined to be lawful nudity content should still be classified as ‘CSEA content proxy’ (i.e. prohibited by terms of service), since this would affect whether services could fully reverse a ban.
  • Noting that implementation of tools to prevent capture and recording of livestreams, in accordance with Ofcom’s recommended measure, may also engage the storage and access technology provisions of PECR.
  • Supporting Ofcom’s proposals to codify the definition of highly effective age assurance (HEAA) in its Codes of Practice; and emphasising that implementation of HEAA must respect privacy and comply with data protection law.

Most of the ICO comments that are not included in its Executive Summary consist of various observations on the impact of, and need to comply with, data protection law.

Annex – Ofcom’s proposed additional safety measures

Each recommendation is listed below with Ofcom's draft code reference and its categorisation (proactive content-related, reactive content-related or non-content).

Livestreaming

  • User reporting mechanism that a livestream contains content that depicts the risk of imminent physical harm. (ICU D17 – Non-content)
  • Ensure that human moderators are available whenever users can livestream. (ICU C16 – Reactive content-related)
  • Ensure that users cannot, in relation to a one-to-many livestream by a child (identified by highly effective age assurance) in the UK:
    a) Comment on the content of the livestream;
    b) Gift to the user broadcasting the livestream;
    c) React to the livestream;
    d) Use the service to screen capture or record the livestream;
    e) Where technically feasible, use other tools outside of the service to screen capture or record the livestream.
    (ICU F3 – Non-content)

Proactive technology

  • Assess whether proactive technology to detect or support the detection of target illegal content is available, is technically feasible to deploy on their service, and meets the proactive technology criteria. If so, they should deploy it. (ICU C11 – Proactive content-related)
  • Assess existing proactive technology that they are using to detect or support the detection of target illegal content against the proactive technology criteria and, if necessary, take steps to ensure the criteria are met. (ICU C12 – Proactive content-related)
  • As ICU C11, but for target content harmful to children. (PCU C9 – Proactive content-related)
  • As ICU C12, but for target content harmful to children. (PCU C10 – Proactive content-related)

Intimate image abuse (IIA) hash matching

  • Use perceptual hash matching to detect image based intimate image abuse content so it can be removed. (ICU C14 – Proactive content-related)

Terrorism hash matching

  • Use perceptual hash matching to detect terrorism content so that it can be removed. (ICU C13 – Proactive content-related)

CSAM hash matching (extended to more service providers)

  • Ensure that hash-matching technology is used to detect and remove child sexual abuse material (CSAM). (ICU C9 – Proactive content-related)

Recommender systems

  • Design and operate recommender systems to ensure that content indicated potentially to be certain kinds of priority illegal content is excluded from users’ recommender feeds, pending further review. (ICU E2 – Proactive content-related)

User sanctions

  • Prepare and apply a sanctions policy in respect of UK users who generate, upload, or share illegal content and/or illegal content proxy, with the objective of preventing future dissemination of illegal content. (ICU H2 – Reactive content-related)
  • As ICU H2, but for content harmful to children and/or harmful content proxy. (PCU H2 – Reactive content-related)
  • Set and record performance targets for content moderation function covering the time period for taking relevant content moderation action. (ICU C4, PCU C4 – Reactive content-related)

CSEA user banning

  • Ban users who share, generate, or upload CSEA, and those who receive CSAM, and take steps to prevent their return to the service for the duration of the ban. (ICU H3 – Reactive content-related)

Highly effective age assurance

  • Definitions of highly effective age assurance; principles that providers should have regard to when implementing an age assurance process. (ICU B1, PCU B1 – Non-content)
  • Appeals of highly effective age assurance decisions. (ICU D15, ICU D16 – Non-content)

Increasing effectiveness for U2U settings, functionalities, and user support

  • Safety defaults and support for child users. (ICU F1 & F2 – Non-content)

Crisis response

  • Prepare and apply an internal crisis response protocol. Conduct and record a post-crisis analysis. Dedicated law enforcement crisis communication channel. (ICU C15 / PCU C11 – Non-content)

Appeals

  • Appeals to cover decisions taken on the basis that content was an ‘illegal content proxy’. (ICU D – Reactive content-related)
  • Appeals to cover decisions taken on the basis that content was a ‘content harmful to children proxy’. (PCU D – Reactive content-related)


[Amended 'high' degree of confidence to 'higher' in two places. 17 Nov 2025.]