
Monday, 4 August 2025

Ofcom’s proactive technology measures: principles-based or vague?

Ofcom has published its long-expected consultation on additional measures that it recommends U2U platforms and search engines should implement to fulfil their duties under the Online Safety Act.  The focus, this time, is almost entirely on proactive technology: automated systems intended to detect particular kinds of illegal content and content harmful to children, with a view to blocking or swiftly removing them.

The consultation marks a further step along the UK’s diverging path from the EU Digital Services Act. The DSA prohibits the imposition of general monitoring obligations on platforms. Those are just the kind of obligations envisaged by the Online Safety Act’s preventative duties, which Ofcom is gradually fleshing out and implementing.

Ofcom finalised its first Illegal Harms Code of Practice in December 2024. For U2U services the Code contained two proactive technology recommendations: hash and URL matching for CSAM. The initial consultation had also suggested fuzzy keyword matching to detect some kinds of fraud, but Ofcom did not proceed with that. The regulator indicated that it would revisit fraud detection in a later, broader consultation. That has now arrived.

The new U2U proposals go beyond fraud. They propose perceptual hash-matching for visual terrorism content and for intimate image abuse content. They suggest that content should be excluded from recommender feeds if there are indications that it is potentially illegal, unless and until it is determined via content moderation to be legal. 
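
To illustrate what perceptual hash-matching of this kind typically involves (a minimal sketch, not Ofcom's specification; the hash values and the distance threshold below are invented), an uploaded image is reduced to a short fingerprint and compared against a database of fingerprints of known items, with a match declared if the two differ by no more than a chosen number of bits. The looser that threshold, the more altered copies are caught, but the more innocent images with coincidentally similar fingerprints are caught too - the same kind of trade-off discussed later in this post.

    # Minimal sketch of perceptual hash matching. The hash values are invented;
    # in practice they would be produced by a perceptual hashing algorithm
    # (such as pHash) run over the image. The threshold is a hypothetical choice.
    KNOWN_HASHES = {0x9F3AC2E41B7D5560, 0x1234567890ABCDEF}  # fingerprints of known items
    MATCH_THRESHOLD = 10  # max number of differing bits still counted as a match

    def hamming_distance(a: int, b: int) -> int:
        # Number of bit positions in which the two 64-bit hashes differ.
        return bin(a ^ b).count("1")

    def is_match(candidate_hash: int) -> bool:
        # Treat the upload as a match if it is 'close enough' to any known item.
        return any(hamming_distance(candidate_hash, known) <= MATCH_THRESHOLD
                   for known in KNOWN_HASHES)

    print(is_match(0x9F3AC2E41B7D5561))  # near-duplicate of a known hash -> True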

Most ambitiously, Ofcom wants certain relatively large platforms to research the availability and suitability (in accordance with proposed criteria) of proactive technology for detection of fraud and some other illegal behaviour, then implement it if appropriate. Those platforms would also have to review existing technologies that they use for these purposes and, if feasible, bring them into line with Ofcom’s criteria.

Ofcom calls this a ‘principles-based’ measure, probably because it describes a qualitative evaluation and configuration process rather than prescribing any concrete parameters within which the technology should operate.

Freedom of expression

Legal obligations for proactive content detection, blocking and removal engage the fundamental freedom of expression rights of users. Obligations must therefore comply with ECHR human rights law, including requirements of clarity and certainty.

Whilst a principles-based regime may be permissible, it must nevertheless be capable of predictable application. Otherwise it will stray into impermissible vagueness. Lord Sumption in Catt said that what is required is a regime the application of which is:

“reasonably predictable, if necessary with the assistance of expert advice. But except perhaps in the simplest cases, this does not mean that the law has to codify the answers to every possible issue which may arise. It is enough that it lays down principles which are capable of being predictably applied to any situation.”

In Re Gallagher he said that:

“A measure is not “in accordance with the law” if it purports to authorise an exercise of power unconstrained by law. The measure must not therefore confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself. Nor should it be couched in terms so vague or so general as to produce substantially the same effect in practice.”

Typically these strictures would apply to powers and duties of public officials. The Online Safety Act is different: it requires U2U service providers to make content decisions and act (or not) to block or remove users’ posts. Thus the legal regime that requires them to do that has to provide sufficient predictability of their potential decisions and resulting acts.

In addition to fraud and financial services offences, Ofcom’s proposed principles-based measures would apply to image-based CSAM, CSAM URLs, grooming, and encouraging or assisting suicide (or attempted suicide).

Any real-time automated content moderation measure poses questions about human rights compatibility. The auguries are not promising: proactive technology, armed only with the user’s post and perhaps some other on-platform data, will always lack contextual information. For many offences off-platform information can be the difference between guilt and innocence.  Decisions based on insufficient information inevitably stray into arbitrariness.

Then there is the trade-off between precision and recall. Typically, the more target content the automated tool is tuned to catch, the more false positives it will also throw up. False positives result in collateral damage to legitimate speech. It does not take many false positives to constitute disproportionate interference with users’ rights of freedom of expression.
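
A toy illustration, with wholly invented numbers, shows the mechanics of that trade-off: lowering the decision threshold catches more of the target content (higher recall) at the cost of sweeping in more legitimate posts (lower precision, more false positives).

    # Illustrative sketch of the precision/recall trade-off, using made-up data.
    # Each item has a classifier score; moving the decision threshold changes how
    # much target content is caught and how much legitimate content comes with it.
    items = [  # (classifier_score, actually_illegal) - hypothetical data
        (0.95, True), (0.90, True), (0.85, False), (0.80, True),
        (0.70, False), (0.60, True), (0.55, False), (0.40, False),
    ]

    def precision_recall(threshold: float):
        flagged = [illegal for score, illegal in items if score >= threshold]
        true_positives = sum(flagged)
        false_positives = len(flagged) - true_positives
        total_illegal = sum(illegal for _, illegal in items)
        precision = true_positives / len(flagged) if flagged else 1.0
        recall = true_positives / total_illegal
        return precision, recall, false_positives

    for t in (0.9, 0.7, 0.5):
        p, r, fp = precision_recall(t)
        print(f"threshold={t}: precision={p:.2f} recall={r:.2f} false positives={fp}")
    # As the threshold drops, recall rises from 0.50 to 1.00 while false positives
    # climb from 0 to 3 - more of the target caught, more legitimate posts flagged.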

Lord Grade, the Chairman of Ofcom, said in a recent speech that the aims of tackling criminal material and content that poses serious risks of harm to children’s physical or emotional health were not in conflict with freedom of expression. Indeed so, but focusing only on the aim misses the point: however worthy the end, it is the means - in this case proactive technology - that matters.

Prescribed by law

Ofcom’s Proactive Technology Draft Guidance says this about proportionality of the proposed measures:

“Proactive technology used for detection of harmful content involves making trade-offs between false positives and false negatives. Understanding and managing those trade-offs is essential to ensure the proactive technology performs proportionately, balancing the risk of over-removal of legitimate content with failure to effectively detect harm.” (para 5.14)

Proportionality is a requirement of human rights compliance. However, before considering proportionality a threshold step has to be surmounted: the ‘prescribed by law’ or ‘legality’ condition. This is a safeguard against arbitrary restrictions - laws should be sufficiently precise and certain that they have the quality of law.

The prescribed by law requirement is an aspect of the European Convention on Human Rights. It has also been said to be a UK constitutional principle that underpins the rule of law:

"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it." (Lord Diplock, Black-Clawson [1975])

The Constitutional Reform Act 2005 refers in S.1 to:

“the existing constitutional principle of the rule of law”.

For content monitoring obligations the quality of law has two facets, reflecting the potential impact of the obligations on the fundamental rights of both platforms and users.

The platform aspect is written into the Act itself:

“the measures described in the code of practice must be sufficiently clear, and at a sufficiently detailed level, that providers understand what those measures entail in practice”. (Schedule 4)

The user aspect is not spelled out in the Act but is no less significant for that. Where a user’s freedom of speech may be affected by steps that a platform takes to comply with its duties, any interference with the user’s right of freedom of expression must be founded on a clear and precise rule.

That means that a user must be able to foresee in advance with reasonable certainty whether something that they have in mind to post is or is not liable to be blocked, removed or otherwise affected as a result of the obligations that the Act places on the platform.

That is not simply a matter of users themselves taking care to comply with substantive law when they consider posting content. The Act interpolates platforms into the process and may require them to make judgements about whether the user’s post is or is not illegal. Foreseeability is therefore a function both of the substantive law and of the rules about how a platform should make those judgements.

If, therefore, the mechanism set up by the Act and Ofcom for platforms to evaluate, block and take down illegal content is likely to result in unpredictable, arbitrary determinations of what is and is not illegal, then the mechanism fails the ‘prescribed by law’ test and is a per se violation of the right of freedom of expression.

Equally, if the regime is so unclear about how it would operate in practice that a court is not in a position to assess its proportionality, that would also fail the ‘prescribed by law’ test. That is the import of Lord Sumption’s observations in Catt and Gallagher (above).

A prescriptive bright-line rule, however disproportionate it might be, would satisfy the ‘prescribed by law’ test and fall to be assessed only by reference to necessity and proportionality. Ofcom’s principles-based recommendations, however, are at the opposite end of the spectrum: they are anything but a bright-line rule. The initial ‘prescribed by law’ test therefore comes into play.

How do Ofcom’s proposed measures stack up?

Service providers themselves would decide how accurate the technology has to be, what proportion of content detected by the technology should be subjected to human review, and what is an acceptable level of false positives.

Whilst Ofcom specifies various ‘proactive technology criteria’, those are expressed as qualitative factors to be taken into account, not quantitative parameters. Ofcom does not specify what might be an appropriate balance between precision and recall, nor what is an appropriate proportion of human review of detected content.

Nor does Ofcom indicate what level of false positives might be so high as to render the technology (alone, or in combination with associated procedures) insufficiently accurate.
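
Some back-of-the-envelope arithmetic, using entirely invented figures, shows why that omission matters: when target content is a small fraction of overall traffic, even an apparently low false positive rate translates into a large absolute number of legitimate posts flagged, and a surprisingly low precision.

    # Back-of-the-envelope arithmetic with invented figures: a seemingly small
    # false positive rate produces a large absolute number of legitimate posts
    # flagged when the target content is rare.
    daily_posts = 10_000_000      # hypothetical volume of user posts per day
    prevalence = 0.001            # assume 0.1% of posts are actually illegal
    recall = 0.90                 # assume the tool catches 90% of illegal posts
    false_positive_rate = 0.005   # assume 0.5% of legal posts are wrongly flagged

    illegal = daily_posts * prevalence
    legal = daily_posts - illegal
    true_positives = illegal * recall              # 9,000 illegal posts caught
    false_positives = legal * false_positive_rate  # 49,950 legitimate posts flagged
    precision = true_positives / (true_positives + false_positives)

    print(f"True positives:  {true_positives:,.0f}")
    print(f"False positives: {false_positives:,.0f}")
    print(f"Precision:       {precision:.0%}")     # roughly 15%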

Examples of Ofcom’s approach include:

“However, there are some limitations to the use of proactive technology in detecting or supporting the detection of the relevant harms. For example, proactive technology does not always deal well with nuance and context in the same way as humans.

However, we mitigate this through the proactive technology criteria which are designed to ensure proactive technology is deployed in a way that ensures an appropriate balance between precision and recall, and that an appropriate proportion of content is reviewed by humans.” (Consultation, para 9.92)

“Where a service has a higher tolerance for false positives, more content may be wrongly identified. … The extent of false positives will depend on the service in question and the way in which it configures its proactive technology. The measure allows providers flexibility in this regard, including as to the balance between precision and recall (subject to certain factors set out earlier in this chapter).” (Consultation, paras 9.135, 9.136)

“… when determining what is an appropriate proportion of detected content to review by humans, providers have flexibility to decide what proportion of detected content it is appropriate to review, however in so doing, providers should ensure that the following matters are taken into account…” (Consultation, para 9.19)

“However, in circumstances where false positives are consistently high and cannot be meaningfully reduced or mitigated, particularly where this may have a significant adverse impact on user rights, providers may conclude that the proactive technology is incapable of meeting the criteria.” (Proactive Technology Draft Guidance, para 5.19)

How high is high? How significant is significant? No answer is given, other than that the permissible level of false positives is related to the nature of the subsequent review of detected content. As we shall see, the second stage review does not require all content detected by the proactive technology to be reviewed by human beings. The review could, seemingly, be conducted by a second automated system.

The result is that two service providers in similar circumstances could arrive at completely different conclusions as to what constitutes an acceptable level of legitimate speech being blocked or taken down. Ofcom acknowledges that the flexibility of its scheme:

“could lead to significant variation in impact on users’ freedom of expression between services”. (Consultation, para 9.136)

That must raise questions about the predictability and foreseeability of the regime.

If the impact on users’ expression is not reasonably foreseeable, that is a quality of law failure and no further analysis is required. If that hurdle were surmounted, there is still the matter of what level of erroneous blocking and removal would amount to a disproportionate level of interference with users’ legitimate freedom of expression. 

Proportionality?

Ofcom concludes that:

“Having taken account of the nature and severity of the harms in question, the principles we have built into the measure to ensure that the technology used is sufficiently accurate, effective and lacking in bias, and the wider range of safeguards provided by other measures, we consider overall that the measure’s potential interference to users’ freedom of expression to be proportionate.” (Consultation, para 9.154)

However, it is difficult to see how Ofcom (or anyone else) can come to any conclusion as to the overall proportionality of the recommended principles-based measures when they set no quantitative or concrete parameters for precision versus recall, accuracy of review of suspect content, or an ultimately acceptable level of false positives.

Ofcom’s discussion of human rights compliance starts with proportionality. While it notes that the interference must be ‘lawful’, there is no substantive discussion of the ‘prescribed by law’ threshold.

Prior restraint

Finally, on the matter of human rights compatibility, proactive detection and filtering obligations constitute a species of prior restraint (Yildirim v Turkey (ECtHR), Poland v The European Parliament and Council (CJEU)).

Prior restraint is not impermissible. However, it does require the most stringent scrutiny and circumscription, in which risk of removal of legal content will loom large. The ECtHR in Yildirim noted that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.

The proactive technology criteria

Ofcom’s proactive technology criteria are, in reality, framed not as a set of criteria but as a series of factors that the platform should take into account.  Ofcom describes them as “a practical, outcomes-focused set of criteria.” (Consultation, para 9.13)

Precision and recall: One criterion is that the technology has been evaluated using “appropriate” performance metrics and

“configured so that its performance strikes an appropriate balance between precision and recall”.  (Recommendation C11.3(c))

Ofcom evidently appreciated that, without elaboration, “appropriate” was an impermissibly vague determinant. The draft Code of Practice goes on (Recommendation C11.4(a)):

“when configuring the technology so that it strikes an appropriate balance between precision and recall, the provider should ensure that the following matters are taken into account:

i) the service’s risk of relevant harm(s), reflecting the risk assessment of the service and any information reasonably available to the provider about the prevalence of target illegal content on the service;

ii) the proportion of detected content that is a false positive;

iii) the effectiveness of the systems and/or processes used to identify false positives; and

iv) in connection with CSAM or grooming, the importance of minimising the reporting of false positives to the National Crime Agency (NCA) or a foreign agency;”

These factors may help a service provider tick the compliance boxes – ‘Yes, I have taken these factors into account’ - but they do not amount to a concrete determinant of what constitutes an appropriate balance between precision and recall.

Review of detected content: Accuracy of the proactive technology is, as already alluded to, only the first stage of the recommended process. The service provider has to treat a detected item as providing ‘reason to suspect’ that it is illegal content, then move on to a second stage: review.

“Where proactive technology detects or supports the detection of illegal content and/or content harmful to children, providers should treat this as reason to suspect that the content may be target illegal content and/or content harmful to children.

Providers should therefore take appropriate action in line with existing content moderation measures, namely ICU C1 and ICU C2 (in the Illegal Content User-to-user Codes of Practice) and PCU C1 and PCU C2 (in the Protection of Children User-to-user Code of Practice), as applicable.” (Consultation, para 9.74)

That is reflected in draft Codes of Practice paras ICU C11.11, 12.9 and PCU C9.9, 10.7. For example:

“ICU C11.11 Where proactive technology detects, or supports the detection of, target illegal content in accordance with ICU C11.8(a), the provider should treat this as reason to suspect that the content may be illegal content and review the content in accordance with Recommendation ICU C1.3.”

‘Review’ does not necessarily mean human review. Compliance with the proactive technology criteria requires that:

“...policies and processes are in place for human review and action is taken in accordance with that policy, including the evaluation of outputs during development (where applicable), and the human review of an appropriate proportion of the outputs of the proactive technology during deployment. Outputs should be explainable to the extent necessary to support meaningful human judgement and accountability.” (Emphasis added) (draft Code of Practice Recommendation ICU C11.3(g))

The consultation document says:

“It should be noted that this measure does not itself recommend the removal of detected content. Rather, it recommends that providers moderate detected content in accordance with existing content moderation measures (subject to human review of an appropriate proportion of detected content, as mentioned above).” (Consultation, para 9.147)

And:

“Providers have flexibility in deciding what proportion of detected content is appropriate to review, taking into account [specified factors]” (Consultation, para 9.145)

Ofcom has evidently recognised that “appropriate proportion” is, without elaboration, another impermissibly vague determinant. It adds (Recommendation C11.4(b)):

“when determining what is an appropriate proportion of detected content to review by humans, the provider should ensure that the following matters are taken into account:

i) the principle that the resource dedicated to review of detected content should be proportionate to the degree of accuracy achieved by the technology and any associated systems and processes;

ii) the principle that content with a higher likelihood of being a false positive should be prioritised for review; and

iii) in the case of CSAM or grooming, the importance of minimising the reporting of false positives to the NCA or a foreign agency.”

As with precision and recall, these factors may help a service provider tick the compliance boxes but are not a concrete determinant of the proportion of detected content to be submitted to human review in any particular case.
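
To make that flexibility concrete, the sketch below shows one way (among many) that a provider might act on the prioritisation factor: routing the least confidently classified detections to a limited pool of human reviewers and leaving the rest to automated second-stage moderation. The scores and the review budget are invented; nothing in the draft Code fixes either of them.

    # Illustrative sketch only: one possible implementation of the factor that
    # content more likely to be a false positive is prioritised for human review.
    # The scores and the review budget are hypothetical.
    detected = [  # (item_id, classifier_confidence) for items flagged by the tool
        ("a", 0.99), ("b", 0.62), ("c", 0.95), ("d", 0.55), ("e", 0.71),
    ]
    HUMAN_REVIEW_BUDGET = 2  # the "appropriate proportion" is left to the provider

    # Lower confidence = higher likelihood of a false positive, so review those first.
    queue = sorted(detected, key=lambda item: item[1])
    for_human_review = queue[:HUMAN_REVIEW_BUDGET]
    automated_only = queue[HUMAN_REVIEW_BUDGET:]

    print("Human review:", [i for i, _ in for_human_review])        # ['d', 'b']
    print("Automated second stage only:", [i for i, _ in automated_only])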

Second stage review – human, more technology or neither?

The upshot of all this appears to be that content detected by the proactive technology should be subject to review in accordance with the Code of Practice moderation recommendations; and that an ‘appropriate proportion’ of that content should be subject to human review.

But if only an ‘appropriate proportion’ of content detected by the proactive technology has to be subject to human review, how is the rest to be treated? Since it appears that some kind of ‘appropriate action’ is contemplated in accordance with Ofcom’s content moderation recommendations, the implication appears to be that moderation at the second stage could be by some kind of automated system.

In that event it would seem that the illegal content judgement itself would be made by that second stage technology in accordance with Recommendation C1.3.

Recommendation C1.3, however, does not stipulate the accuracy of second stage automated technology. The closest that the Code of Practice comes is ICU C4.2 and 4.3:

“The provider should set and record performance targets for its content moderation function, covering at least:

a) the time period for taking relevant content moderation action; and

b) the accuracy of decision making.

In setting its targets, the provider should balance the need to take relevant content moderation action swiftly against the importance of making accurate moderation decisions.”

Once again, the path appears to lead to an unpredictable balancing exercise by a service provider.

Curiously, elsewhere Ofcom appears to suggest that second stage “complementary tools” could in some cases merely be an ‘additional safeguard’:

“What constitutes an appropriate balance between precision and recall will depend on the nature of the relevant harm, the level of risk identified and the service context. For example, in some cases a provider might optimise for recall to maximise the quantity of content detected and apply additional safeguards, such as use of complementary tools or increased levels of human review, to address false positives. In other cases, higher precision may be more appropriate, for example, to reduce the risk of adverse impacts on user rights.” (Proactive Technology Draft Guidance, para 5.18)

If the implication of ‘in some cases’ is that in other cases acting on the output of the proactive technology without a second stage review would suffice, that would seem to be inconsistent with the requirement that all detected content be subject to some kind of moderation in accordance with Recommendation C1.3.

Moreover, under Ofcom’s scheme proactive technology is intended only to provide ‘reason to suspect’ illegality. That would not conform to the standard stipulated by the Act for an illegal content judgement: ‘reasonable grounds to infer’.

Conclusion

When, as Ofcom recognises, the impact on users’ freedom of expression will inevitably vary significantly between services, and Ofcom’s documents do not condescend to what is or is not an acceptable degree of interference with legitimate speech, it is difficult to see how a user could predict, with reasonable certainty, how their posts are liable to be affected by platforms’ use of proactive technology in compliance with Ofcom’s principles-based recommendations.

Nor is it easy to see how a court would be capable of assessing the proportionality of the measures. As Lord Sumption observed, the regime should not be couched in terms so vague or so general as, substantially, to confer a discretion so broad that its scope is in practice dependent on the will of those who apply it. Again, Ofcom's acknowledgment that the flexibility of its scheme could lead to significant variation in impact on users’ freedom of expression does not sit easily with that requirement.  

Ofcom, it should be acknowledged, is to an extent caught between a rock and a hard place. It has to avoid being overly technology-prescriptive, while simultaneously ensuring that the effects of its recommendations are reasonably foreseeable to users and capable of being assessed for proportionality. Like much else in the Act, that may in reality be an impossible circle to square. That does not bode well for the Act’s human rights compatibility.

[Amended 6 August 2025 to add ‘principles-based’ to the first paragraph of the Conclusion.]


Sunday, 13 July 2025

The Ordinary Reasonable Person encounters (or not) cyber-abuse

The recent decision of the Australian Administrative Review Tribunal in X Corp and Elston v eSafety Commissioner illustrates the complexities that can arise when the law tasks a regulator or platform to adjudge an online post.

The decision grapples with a dilemma that is familiar, albeit under a very different legislative regime, from the UK’s Online Safety Act 2023. It also features in the police takedown notice scheme for unlawful knives and other weapons content contained in the Crime and Policing Bill (currently making its way through Parliament).

At a high level, the issue is how to achieve rapid removal of impugned user content (typically because it is illegal under the general law or defined as harmful in some way), while not affecting legitimate posts. The specific challenge is that the contents of the post alone are often insufficient to determine whether the legal line has been crossed. Contextual information, which may be off-platform and involve investigation, is required. The Elston case provides a vivid illustration.

The twin imperatives of rapid removal and adequate investigation of context stand in conflict with each other. A regime that requires contravention to be adjudged solely on the contents of a post, ignoring external context, is likely to be either ineffectual or overreaching, depending on which way the adjudicator is required to jump in the absence of relevant information.

Australia’s Online Safety Act 2021 empowers the eSafety Commissioner, but only following receipt of a complaint, to issue a content removal notice to a social media platform if she is satisfied that a user’s post constitutes cyber-abuse material targeted at an Australian adult. (In this respect the Australian legislation resembles the UK Crime and Policing Bill more than our Online Safety Act: Ofcom has no power under the OSA to require removal of a specific item of user content. The Crime and Policing Bill will institute a regime of police takedown notices for unlawful knives and other weapons content, albeit not predicated on receipt of a complaint.)

Cyber-abuse material under the Australian Act has two key elements. The eSafety Commissioner has to be satisfied of both before issuing a removal notice:

Intention Element: an ordinary reasonable person would conclude that it is likely that the material was intended to have an effect of causing serious harm to a particular Australian adult.

Offense Element: an ordinary reasonable person in the position of the Australian adult would regard the material as being, in all the circumstances, menacing, harassing or offensive.

Serious harm is defined as serious physical harm or serious harm to a person’s mental health, whether temporary or permanent. Serious harm to a person’s mental health includes:

(a) serious psychological harm; and

(b) serious distress;

but does not include mere ordinary emotional reactions such as those of only distress, grief, fear or anger.

The need to assess what an ‘ordinary reasonable person’ would think is common to both elements. For the Intention Element the Ordinary Reasonable Person has to determine the likely intention of the person who posted the material. For the Offense Element, in order to determine how the material should be regarded, the Ordinary Reasonable Person has to be put in the position of the Australian adult putatively intended to be targeted.

The reason why the legislation hypothesises an Ordinary Reasonable Person is to inject some objectivity into what could otherwise be an overly subjective test.

The Tribunal observed that the Intention Element converted what would otherwise be “a broadly available censorship tool based on emotional responses to posted material” into a provision that “protects people from a much narrower form of conduct where causing serious harm to a particular person was, in the relevant sense, intended” [21]. (This has similarities to the heavy lifting done by the mental element in broadly drafted terrorism offences.)

We are in familiar legal territory with fictive characters such as the Ordinary Reasonable Person. It is reminiscent of the fleeting appearance of the Person of Ordinary Sensibilities in the draft UK Online Safety Bill.

Nevertheless, as the Tribunal decision illustrates, the attributes of the hypothetical person may need further elucidation. Those characteristics can materially affect the balance between freedom of expression and the protective elements of the legislation in question.

Thus, what is the Ordinary Reasonable Person taken generally to know?  What information can the Ordinary Reasonable Person look at in deciding whether intention to cause serious harm is likely? How likely is likely?

The information available to the Ordinary Reasonable Person

The question of what information can, or should, be taken into account is especially pertinent to legislation that requires moderation decisions to be made that will impinge on freedom of expression. The Tribunal posed the question thus:

“… whether findings on the Intention Element should be made on an impressionistic basis after considering a limited range of material, or whether findings should be made after careful consideration, having regard to any evidence obtained as part of any investigation or review process.” [45]

It found that:

“The history and structure of the provisions suggest that while impressionistic decision-making may be authorised in the first instance, early decisions made on limited information can and should be re-visited both internally and externally as more information becomes available, including as a result of input from the affected end-user.” [45]

That was against the background that:

“…the legislation as passed allows for rapid decision making by the Commissioner to deal with material that appears, on its face, to be within a category that the Act specified could be the subject of a removal notice. However, once action has been taken, the insertion of s 220A confirms that Parliament accepted that there needed to be an opportunity for those affected by the action to have an opportunity to address whether the material was actually within the prohibited statutory category. External review by the Tribunal was provided for with the same end in mind.” [44]

The UK Online Safety Act states that a platform making an illegality judgement should do so on the basis of all relevant information reasonably available to it. Ofcom guidance fleshes out what information is to be regarded as reasonably available.

The UK Crime and Policing Bill says nothing about what information a police officer giving an unlawful weapons content removal notice, or a senior officer reviewing such a notice, should seek out and take into account. Nor does it provide any opportunity for the user whose content is condemned to make representations, or to be notified of the decision.

Generally speaking, the less information that can or should be taken into account, the greater the likelihood of arbitrary decision-making and consequent violation of freedom of expression rights.

In the Elston case three different variations on the Ordinary Reasonable Person were put to the Tribunal. The eSafety Commissioner argued that the Ordinary Reasonable Person should be limited to considering the poster’s profile on X and the material constituting the post. The poster’s subsequent evidence about his intention and motivations was irrelevant to determining whether the Intention Element was satisfied. The same was said to apply to evidence about the poster’s knowledge of the Australian person said to be targeted. (The Tribunal observed that that would mean that even material contained in the complaint that preceded the removal notice would be excluded from consideration.)

As to the general knowledge of the Ordinary Reasonable Person, the eSafety Commissioner argued that (for the purposes of the case before the Tribunal, which concerned a post linking to and commenting on a newspaper article about a transgender person) the Ordinary Reasonable Person would be aware that material on X can bully individuals, would understand that public discourse around sexuality and gender can be polarising as well as emotionally charged; and would understand that calling a transgender man a woman would be to act contrary to that transgender man’s wishes.

X Corp argued that the decision-maker was entitled to have regard to evidence (including later evidence) concerning the immediate context as at the time of the post, but not more. The facts that could be known to the ordinary reasonable person when making their assessment included facts about the subject of the post or the poster and their relationship at the time of the post, but not evidence about what happened afterwards.

The significance of the different positions was that on X Corp’s case, later evidence could be taken into account to the effect that the poster did not know, or know of, the person who was the subject of the post until he read the newspaper article. That was not apparent from the post itself or the poster’s profile.

Mr Elston (the poster) argued that a wide range of material could be acquired and treated as available to the ordinary reasonable person when asked to decide whether the material posted ‘was intended to have an effect of causing serious harm’.

On this view of the statutory power, evidence obtained before or after the post, during the course of the investigation and concerning matters that occurred after the post was made, could be treated as available to the Ordinary Reasonable Person when considering the Intention Element.

On this approach, Mr Elston’s own evidence about his intention would be “relevant to consider, but not necessarily conclusive of what an ordinary reasonable person would conclude about his intention.” [62]

The Tribunal agreed with Mr Elston’s approach:

“The existence of the investigative powers available to the Commissioner and the complaint-based nature of the power provide a powerful basis for concluding that the Commissioner and the Tribunal should be feeding all of the available evidence into the assessment of what the ‘ordinary reasonable person’ would conclude was likely before determining whether the Intention Element is satisfied.” [74]

It added:

“The Parliament was concerned to give end-users an opportunity to address claims about their conduct both on internal review and by providing review in the Tribunal. To read the ordinary reasonable person lens as a basis for disregarding evidence submitted by either the complainant or the end-user or discovered by the Commissioner during an investigation is not consistent with the fair, high quality decision-making the Parliament made provision for.” [77]

The Tribunal then spelled out the consequences of the Commissioner’s approach:

“…In many circumstances, including this case, limiting the information that can be considered by the ‘ordinary reasonable person’ to the post and closely related material, results in critical information not being available.” [81]

It went on:

“In this case, there is no evidence in any of the material posted and associated with the post, that the post was ever brought to the attention of Mr Cook [the complainant]. …

That Mr Cook was aware of the post is only discoverable by reference to the complaint submitted to the Commissioner. If a decision maker is restricted to knowing that a post was made to a limited audience, none of whom included Mr Cook, reaching the conclusion that the material was intended to cause serious harm to Mr Cook is going to be difficult. In those circumstances, where there appears to be no evidence to which the decision maker can have regard in order to make a finding that the post came to Mr Cook’s attention, let alone was intended to come to his attention, a decision to issue a removal notice could not be sustained.” [81]

The Tribunal reiterated:

“In many cases, it will be the complaint that provides critical context to allow an ordinary reasonable person to conclude that serious harm was intended.” [81]

The Tribunal concluded that evidence about what happened after the post was posted could be relevant if it shed light on the likely intention of the poster. Similarly, evidence about prior behaviour of third parties in response to certain posts could be relevant, even if it was only discoverable by the regulator using compulsory powers:

“So long as evidence sheds light on the statutory question, then it can and should be considered. It would be inappropriate in advance of a particular factual scenario being presented to the decision-maker to say that there are whole categories of evidence that cannot be considered because the statutory test in all circumstances renders the material irrelevant.” [87]

Nevertheless, that did not mean that the concept of the ‘ordinary and reasonable’ person had no effect:

“It moves the assessment away from a specific factual inquiry concerning the actual thought process of the poster and what effect they intended to achieve by the post. I must undertake a more abstract inquiry about what an independent person (who isn’t me) would think was the poster’s intention having regard to the available evidence. Provided evidence is relevant to that question, then it can and should be considered.” [89]

Whilst specific to the Australian statute and its fictive Ordinary Reasonable Person, this discussion neatly illustrates the point that has repeatedly been made (and often ignored): that platform judgements as to illegality required by the UK Online Safety Act will very often require off-platform contextual information and cannot sensibly be made on the basis of a bare user post and profile.

The point assumes greater significance with real-time proactive automated content moderation – something that Ofcom is proposing to extend – which by its very nature is unlikely to have access to off-platform contextual information.

The discussion also speaks eloquently to the silence of the Crime and Policing Bill on what kind and depth of investigation a police officer should conduct in order to be satisfied as to the presence of unlawful weapons content.

Likelihood of serious harm

The other significant point that the Tribunal had to consider was what the statute meant by ‘likely’ that serious harm was intended. The rival contentions were ‘real chance’ and ‘more probable than not’. The Tribunal held that, in the statutory context, the latter was right. The conclusion is notable for acknowledging the adverse consequences for freedom of expression of adopting a lower standard:

“A finding by the ordinary reasonable person that a person was setting out to cause serious harm to another is a serious, adverse finding with implications for freedom of expression. It is not the kind of finding that should be made when it is only possible that serious harm was intended.” [119]

The standard set by the UK Online Safety Act for making content illegality judgements is “reasonable grounds to infer”. It remains questionable, to say the least, whether that standard is compatible with ECHR Article 10. The Crime and Policing Bill says no more than that the police officer must be ‘satisfied’ that the material is unlawful weapons content.  

The Tribunal’s conclusion

On the facts of the case, the Tribunal concluded that an ordinary reasonable person in the position of the complainant Mr Cook would regard the post as offensive; but that the Intention Element was not satisfied. That depended crucially on the broader contextual evidence:

“Read in isolation, the post looks to be an attempt to wound Mr Cook and upset him and cause him distress, perhaps even serious distress. If an ordinary reasonable person was only aware of the post, then it may be open to find that the poster’s intention was likely to be to cause serious harm to Mr Cook. However, when the broader context is known and understood, it is difficult to read the post as intended to harm Mr Cook, or intended to have others direct criticism towards Mr Cook or designed to facilitate vitriol by spreading personal information about him.” [191]

Amongst the broader context was lack of evidence that the poster intended the post to come to Mr Cook’s attention.

“For the post to do any harm it needed to be read by Mr Cook. While I am satisfied that Mr Elston was indifferent to whether the post did come to Mr Cook’s attention and indifferent to whether or not it distressed him, there is no evidence to support the conclusion that the post was made with the intention of it being brought to Mr Cook’s attention.” [197]

Part of the reasoning behind that conclusion was that Mr Elston’s post did not tag Mr Cook’s user handle, but only that of the World Health Organisation (which had appointed Mr Cook to an advisory panel):

“It is notable that Mr Elston only included the handle for the WHO in his post and there is nothing in the body of the post that attempts to facilitate the contacting of Mr Cook by Mr Elston’s followers. Mr Cook’s name is not used in the body of the post.” [200]

Overall, the Tribunal concluded:

“When the evidence is considered as a whole I am not satisfied that an ordinary reasonable person would conclude that by making the post Mr Elston intended to cause Mr Cook serious harm. In the absence of any evidence that Mr Elston intended that Mr Cook would receive and read the post, and in light of the broader explanation as to why Mr Elston made the post, I am satisfied that an ordinary reasonable person would not conclude that that it is likely that the post was intended to have an effect of causing serious harm to Mr Cook.” [207]

For present purposes the actual result in the Elston case matters less than the illustration that it provides of what can be involved in making judgements about removal or blocking of posts against a statutory test: whether that evaluation be done by a regulator, a platform discharging a duty imposed by statute or (in the likely future case of unlawful weapons content) the police.


Tuesday, 11 February 2025

The Online Safety Act grumbles on

Policymakers sometimes comfort themselves that if no-one is completely satisfied, they have probably got it about right. 

On that basis, Ofcom’s implementation of the Online Safety Act’s illegality duties must be near-perfection: the Secretary of State (DSIT) administering a sharp nudge with his draft Statement of Strategic Priorities, while simultaneously under fire for accepting Ofcom’s advice on categorisation of services; volunteer-led community forums threatening to close down in the face of perceived compliance burdens; and many of the Act’s cheerleaders complaining that Ofcom’s implementation has so far served up less substantial fare than they envisaged. 

As of now, an estimated 25,000 UK user-to-user and search providers (plus another 75,000 around the world) are meant to be busily engaged in getting their Illegal Harms risk assessments finished by 16 March. 

Today is Safer Internet Day. So perhaps spare a thought for those who are getting to grips with core and enhanced inputs, puzzling over what amounts to a ‘significant’ number of users, learning that a few risk factors may constitute ‘many’ (footnote 74 to Ofcom’s General Risk Level Table), or wondering whether their service can be ‘low risk’ if they allow users to post hyperlinks.  (Ofcom has determined that hyperlinks are a risk factor for six of the 17 kinds of priority offence designated by the Act: terrorism, CSEA, fraud and financial services, drugs and psychoactive substances, encouraging or assisting suicide and foreign interference offences). 

Grumbles from whichever quarter will come as no great surprise to those (this author included) who have argued from the start that the legislation is an ill-conceived, unworkable mess which was always destined to end in tears. Even so, and making due allowance for the well-nigh impossible task with which Ofcom has been landed, there is an abiding impression that Ofcom’s efforts to flesh out the service provider duties - risk assessment in particular – could have been made easier to understand. 

The original illegal harms consultation drew flak for its sheer bulk: a tad over 1,700 pages. The final round of illegal harms documents is even weightier: over 2,400 pages in all. It is in two parts. The first is a Statement. In accordance with Ofcom’s standing consultation principles, it aims to explain what Ofcom is going to do and why, showing how respondents’ views helped to shape Ofcom’s decisions. That amounts to 1,175 pages, including two summaries. 

The remaining 1,248 pages consist of statutory documents: those that the Act itself requires Ofcom to produce. These are a Register of Risks, Risk Assessment Guidance, Risk Profiles, Record Keeping and Review Guidance, a User to User Illegal Content Code of Practice, a Search Service Illegal Content Code of Practice, Illegal Content Judgements Guidance, Enforcement Guidance, and Guidance on Content Communicated Publicly and Privately. Drafts of the two Codes of Practice were laid before Parliament on 16 December 2024. Ofcom can issue them in final form upon completion of that procedure.

When it comes to ease of understanding, it is tempting to go on at length about the terminological tangles to be found in the documents, particularly around ‘harm’, ‘illegal harm’ and ‘kinds of illegal harm’. But really, what more is worth saying? Ofcom’s documents are, to all intents and purposes, set in stone. Does it help anyone to pen another few thousand words bemoaning opaque language? Other than in giving comfort that they are not alone to those struggling to understand the documents, probably not. Everyone has to get on and make the best of it.

So one illustration will have to suffice. ‘Illegal harm’ is not a term defined or used in the Act. In the original consultation documents Ofcom’s use of ‘illegal harm’ veered back and forth between the underlying offence, the harm caused by an offence, and a general catch-all for the illegality duties; often leaving the reader to guess in which sense it was being used. 

The final documents are improved in some places, but introduce new conundrums in others. One of the most striking examples is paragraph 2.35 and Table 6 of the Risk Assessment Guidance (emphasis added to all quotations below). 

Paragraph 2.35 says: 

“When evaluating the likelihood of a kind of illegal content occurring on your service and the chance of your service being used to commit or facilitate an offence, you should ask yourself the questions set out in Table 6.”

Table 6 is headed: 

“What to consider when assessing the likelihood of illegal content”

The table then switches from ‘illegal content’ to ‘illegal harm’. The first suggested question in the table is whether risk factors indicate that: 

“this kind of illegal harm is likely to occur on your service?” 

‘Illegal harm’ is footnoted with a reference to a definition in the Introduction: 

“the physical or psychological harm which can occur from a user encountering any kind of illegal content…”. 

So what is the reader supposed to be evaluating: the likelihood of occurrence of illegal content, or the likelihood of physical or psychological harm arising from such content? 

If ‘Illegal Harm’ had been nothing more than a title that Ofcom gave to its illegality workstream, then what the term actually meant might not have mattered very much. But the various duties that the Act places on service providers, and even Ofcom’s own duties, rest on carefully crafted distinctions between illegal content, underlying criminal offences and harm (meaning physical or psychological harm) arising from such illegality. 

That can be seen in this visualisation. It illustrates the U2U service provider illegality duties - both risk assessment and substantive - together with the Ofcom duty to prepare an illegality Risks Register and Risk Profiles.  The visualisation divides the duties into four zones (A, B, C and D), explained below. 

A: The duties in this zone require U2U providers to assess certain risks related to illegal content (priority and non-priority). These risks are independent of and unrelated to harm. The risks to be assessed have no direct counterpart in any of the substantive safety duties in Section 10. Their relevance to those safety duties probably lies in the proportionality assessment of measures to fulfil the Section 10 duties. 

Although the service provider’s risk assessment has to take account of the Ofcom Risk Profile that relates to its particular kind of service, Ofcom’s Risk Profiles are narrower in scope than the service provider risk assessment. Under the Act Ofcom’s Risks Register and Risk Profiles are limited to the risk of harm (meaning physical or psychological harm) to individuals in the UK presented by illegal content present on U2U services and by the use of such services for the commission or facilitation of priority offences. 

B:  This zone contains harm-related duties (identified in yellow): Ofcom Risk Profiles, several service provider risk assessment duties framed by reference to harm, plus the one substantive Section 10 duty framed by reference to harm (fed by the results of the harm-related risk assessment duties). Harm has its standard meaning in the Act: physical or psychological harm. 

C: This zone contains two service provider risk assessment duties which are independent of and unrelated to risk of harm, but which feed directly into a corresponding substantive Section 10 duty. 

D: This zone contains the substantive Section 10 duties: one based on harm and three which stand alone. Those three are not directly coupled to the service provider’s risk assessment.

This web of duties is undeniably complex. One can sympathise with the challenge of rendering it into a practical and readily understandable risk assessment process capable of feeding the substantive duties.  Nevertheless, a plainer and more consistently applied approach to terminology in Ofcom's documents would have paid dividends.



Saturday, 17 June 2023

Shifting paradigms in platform regulation

[Based on a keynote address to the conference on Contemporary Social and Legal Issues in a Social Media Age held at Keele University on 14 June 2023.]

First, an apology for the title. Not for the rather sententious ‘shifting paradigms’ – this is, after all, an academic conference – but ‘platform regulation’. If ever there was a cliché that cloaks assumptions and fosters ambiguity, ‘platform regulation’ is it.

Why is that? For three reasons.

First, it conceals the target of regulation. In the context with which we are concerned users – not platforms – are the primary target. In the Online Safety Bill model, platforms are not the end. They are merely the means by which the state seeks to control – regulate, if you like - the speech of end-users.

Second, because of the ambiguity inherent in the word regulation. In its broad sense it embraces everything from the general law of the land that governs – regulates, if you like – our speech to discretionary, broadcast-style, regulation by regulator: the Ofcom model. If we think – and I suspect many don’t - that the difference matters, then to have them all swept up together under the banner of regulation is unhelpful.

Third, because it opens the door to the kind of sloganising with which we have become all too familiar over the course of the Online Harms debate: the unregulated Internet; the Wild West Web; ungoverned online spaces.

What do they mean by this?
  • Do they mean that there is no law online? Internet Law and Regulation has 750,000 words that suggest otherwise.
  • Do they mean that there is law but it is not enforced? Perhaps they should talk to the police, or look at new ways of providing access to justice.
  • Do they mean that there is no Ofcom online? That is true – for the moment - but the idea that individual speech should be subject to broadcast-style regulation rather than the general law is hardly a given. Broadcast regulation of speech is the exception, not the norm.
  • Do they mean that speech laws should be stricter online than offline? That is a proposition to which no doubt some will subscribe, but how does that square with the notion of equivalence implicit in the other studiously repeated mantra: that what is illegal offline should be illegal online?
The sloganising perhaps reached its nadir when the Joint Parliamentary Committee scrutinising the draft Online Safety Bill decided to publish its Report under the strapline: ‘No Longer the Land of the Lawless’ - 100% headline-grabbing clickbait – adding, for good measure: “A landmark report which will make the tech giants abide by UK law”.

Even if the Bill were about tech giants and their algorithms – and according to the government’s Impact Assessment 80% of in-scope UK service providers will be micro-businesses – at its core the Bill seeks not to make tech giants abide by UK law, but to press platforms into the role of detective, judge and bailiff: to require them to pass judgment on whether we – the users - are abiding by UK law. That is quite different.

What are the shifting paradigms to which I have alluded?

First the shift from Liability to Responsibility

Go back twenty-five years and the debate was all about liability of online intermediaries for the unlawful acts of their users. If a user’s post broke the law, should the intermediary also be liable and if so in what circumstances? The analogies were with phone companies and bookshops or magazine distributors, with primary and secondary publishers in defamation, with primary and secondary infringement in copyright, and similar distinctions drawn in other areas of the law.

In Europe the main outcome of this debate was the E-Commerce Directive, passed at the turn of the century and implemented in the UK in 2002. It laid down the well-known categories of conduit, caching and hosting. Most relevantly to platforms, for hosting it provided a liability shield based on lack of knowledge of illegality. Only if you gained knowledge that an item of content was unlawful, and then failed to remove that content expeditiously, could you be exposed to liability for it. This was closely based on the bookshop and distributor model.

The hosting liability regime was – and is – similar to the notice and takedown model of the US Digital Millennium Copyright Act – and significantly different from the US S.230 Communications Decency Act 1996, which was more closely akin to full conduit immunity.

The E-Commerce Directive’s knowledge-based hosting shield incentivises – but does not require – a platform to remove user content on gaining knowledge of illegality. It exposes the platform to risk of liability under the relevant underlying law. That is all it does. Liability does not automatically follow.

Of course the premise underlying all of these regimes is that the user has broken some underlying substantive law. If the user hasn’t broken the law, there is nothing that the platform could be liable for.

It is pertinent to ask – for whose benefit were these liability shields put in place? There is a tendency to frame them as a temporary inducement to grow the then nascent internet industry. Even if there was an element of that, the deeper reason was to protect the legitimate speech of users. The greater the liability burden on platforms, the greater their incentive to err on the side of removing content, the greater the risk to legitimate speech and the greater the intrusion on the fundamental speech rights of users. The distributor liability model adopted in Europe, and the S.230 conduit model in the USA, were for the protection of users as much as, if not more than, for the benefit of platforms.

The Shift to Responsibility has taken two forms.

First, the increasing volume of the ‘publishers not platforms’ narrative. The view is that platforms are curating and recommending user content and so should not have the benefit of the liability shields. As often and as loudly as this is repeated, it has gained little legislative traction. Under the Online Safety Bill the liability shields remain untouched. In the EU Digital Services Act the shields are refined and tweaked, but the fundamentals remain the same. If, incidentally, we think back to the bookshop analogy it was never the case that a bookshop would lose its liability shield if it promoted selected books in its window, or decided to stock only left wing literature.

Second, and more significantly, has come a shift towards imposing positive obligations on platforms. Rather than just being exposed to risk of liability for failing to take down users’ illegal content, a platform would be required to do so on pain of a fine or a regulatory sanction. Most significant is when the obligation takes the form of a proactive obligation: rather than awaiting notification of illegal user content, the platform must take positive steps proactively to seek out, detect and remove illegal content.

This has gained traction in the UK Online Safety Bill, but not in the EU Digital Services Act. There is in fact 180-degree divergence between the UK and the EU on this topic. The DSA repeats and re-enacts the principle first set out in Article 15 of the eCommerce Directive: the EU prohibition on Member States imposing general monitoring obligations on conduits, caches and hosts. Although the DSA imposes some positive diligence obligations on very large operators, those still cannot amount to a general monitoring obligation.

The UK, on the other hand, has abandoned its original post-Brexit commitment to abide by Article 15 and – under the banner of a duty of care – has gone all out to impose proactive, preventative detection and removal duties on platforms: duties applying to public forums, together with powers for Ofcom to require private messaging services to scan for CSEA content.

Proactive obligations of this kind raise serious questions about a state’s compliance with human rights law, due to the high risk that in their efforts to determine whether user content is legal or illegal, platforms will end up taking down users’ legitimate speech at scale. Such legal duties on platforms are subject to especially strict scrutiny, since they amount to a version of prior restraint: removal before full adjudication on the merits, or – in the case of upload filtering – before publication.

The most commonly cited reason for these concerns is that platforms will err on the side of caution when faced with the possibility of swingeing regulatory sanctions. However, there is more to it than that: the Online Safety Bill requires platforms to make illegality judgements on the basis of all information reasonably available to them. But an automated system operating in real time will have precious little information available to it – hardly more than the content of the posts. Arbitrary decisions are inevitable.

Add that the Bill requires the platform to treat user content as illegal if it has no more than “reasonable grounds to infer” illegality, and we have baked-in over-removal at scale: a classic basis for incompatibility with fundamental freedom of speech rights; and the reason why in 2020 the French Constitutional Council held the Loi Avia unconstitutional.

The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression.

From Illegality to Harm

For so long as the platform regulation debate centred around liability, it also had to be about illegality: if the user’s post was not illegal, there was nothing to bite on - nothing for which the intermediary could be held liable.

But once the notion of responsibility took hold, that constraint fell away. If a platform could be placed under a preventative duty of care, that could be expanded beyond illegality. That is what happened in the UK. The Carnegie UK Trust argued that platforms ought to be treated analogously to occupiers of physical spaces and owe a duty of care to their visitors, but extended to encompass types of harm beyond physical injury.

The fundamental problem with this approach is that speech is not a tripping hazard. Speech is not a projecting nail, or an unguarded circular saw, that will foreseeably cause injury – with no possibility of benefit – if someone trips over it. Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people. A duty of care is workable for risk of objectively ascertainable physical injury but not for subjectively perceived and contested harms, let alone more nebulously conceived harms to society. The Carnegie approach also glossed over the distinction between a duty to avoid causing injury and a duty to prevent others from injuring each other (imposed only exceptionally in the offline world).

In order to discharge such a duty of care the platform would have to balance the interests of the person who claims to be traumatised by reading something to which they deeply object, against the interests of the speaker, and against the interests of other readers who may have a completely different view of the merits of the content.

That is not a duty that platforms are equipped, or could ever have the legitimacy, to undertake; and if the balancing task is entrusted to a regulator such as Ofcom, that is tantamount to asking Ofcom to write a parallel statute book for online speech – something which many would say should be for Parliament alone.

The misconceived duty of care analogy has bedevilled the Online Harms debate and the Bill from the outset. It is why the government got into such a mess with ‘legal but harmful for adults’ – now dropped from the Bill.

The problems with subjectively perceived harm are also why the government ended up abandoning its proposed replacement for S.127(1) of the Communications Act 2003: the harmful communications offence.

From general law to discretionary regulation

I started by highlighting the difference between individual speech governed by the general law and regulation by regulator. We can go back to the 1990s and find proposals to apply broadcast-style discretionary content regulation to the internet. The pushback was equally strong. Broadcast-style regulation was the exception, not the norm. It was born of spectrum scarcity and had no place in governing individual speech.

ACLU v Reno (the US Communications Decency Act case) applied a medium-specific analysis to the internet and placed individual speech – analogised to old-style pamphleteers – at the top of the hierarchy, deserving of greater protection from government intervention than cable or broadcast TV.

In the UK the key battle was fought during the passing of the Communications Act 2003, when the internet was deliberately excluded from the content remit of Ofcom. That decision may have been based more on practicality than principle, but it set the ground rules for the next 20 years.

It is instructive to hear peers with broadcast backgrounds saying what a mistake it was to exclude the internet from Ofcom’s content remit in 2003 – as if broadcast is the offline norm and as if Ofcom makes the rules about what we say to each other in the street.

I would suggest that the mistake is being made now – both by introducing regulation by regulator and in consigning individual speech to the bottom of the heap.

From right to risk

The notion has gained ground that individual speech is a fundamental risk, not a fundamental right: that we are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back into its bottle.

Other shifts

We can detect other shifts. The blossoming narrative that if someone does something outrageous online, the fault is more with the platform than with the perpetrator. The notion that platforms have a greater responsibility than parents for the online activities of children. The relatively recent shift towards treating large platforms as akin to public utilities, on which obligations not to remove some kinds of user content can legitimately be imposed. We see this chiefly in the Online Safety Bill’s obligations on Category 1 platforms in respect of content of democratic importance, news publisher content and journalistic content.

From Global to Local

I want to finish with something a little different: the shift from Global to Local. Nowadays we tend to have a good laugh at the naivety of the 1990s cyberlibertarians who thought that the bits and bytes would fly across borders and there was not a thing that any nation state could do about it.

Well, the nation states had other ideas, starting with China and its Great Firewall. How successfully a nation state can insulate its citizens from cross-border content is still doubtful, but perhaps more concerning is the mindset behind an increasing tendency to seek to expand the territorial reach of local laws online – in some cases, effectively seeking to legislate for the world.

In theory a state may be able to do that. But should it? The ideal is peaceful coexistence of conflicting national laws, not ever more fervent efforts to demonstrate the moral superiority and cross-border reach of a state’s own local law. Over the years a de facto compromise had been emerging, with the steady expansion of the idea that you engage the laws and jurisdiction of another state only if you take positive steps to target it. Recently, however, some states have become more expansive – not least in their online safety legislation.

The UK Online Safety Bill is a case in point, stipulating that a platform is in-scope if it is capable of being used in the United Kingdom by individuals, and there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom presented by user content on the site.

That is close to a ‘mere accessibility’ test – but not as close as the Australian Online Safety Act, which brings into scope any social media site accessible from Australia.

There has long been a consensus against ‘mere accessibility’ as a test for jurisdiction. It leads either to geo-fencing of websites or to global application of the most restrictive common content denominator. That consensus seems to be in retreat.

Moreover, the more exorbitant the assertion of jurisdiction, the greater the headache of enforcement. That in turn leads to what we see in the UK Online Safety Bill, namely provisions for disrupting the activities of the non-compliant foreign platform: injunctions against support services such as banking or advertising, and site blocking orders against ISPs.

The concern has to be that in their efforts to assert themselves and their local laws online, nation states are not merely re-erecting national borders with a degree of porosity, but erecting Berlin Walls in cyberspace.


Monday, 31 January 2022

Internet legal developments to look out for in 2022

Another instalment of my annual round-up of what is on the horizon for UK internet law [Updated 29 April and 2 November 2022]. It does stray a little beyond our shores, noting some significant EU developments (pre-Brexit habits die hard). As always, it does not include data protection (too big, not really my field).

Draft Online Safety Bill The UK government published its draft Online Safety Bill in May 2021. The Parliamentary Joint Pre-Legislative Scrutiny Committee published its report on the draft Bill on 14 December 2021. A sub-committee of the Commons DCMS Select Committee also published a report on 24 January 2022, as did the Lords Communications and Digital Committee Inquiry on Freedom of Expression Online on 22 July 2021.

The government introduced the Bill into Parliament on 17 March 2022. The Bill had its Second Reading on 19 April 2022. Its Report Stage is paused, likely to be recommenced this month. Among many things for which the legislation is notable, its abandonment of the ECD Article 15 prohibition on general monitoring obligations stands out.

EU Digital Services Act The European Commission published its proposals for a Digital Services Act and a Digital Markets Act on 15 December 2020. The proposed Digital Services Act includes replacements for Articles 12 to 15 of the eCommerce Directive. Following a vote in the European Parliament on 20 January 2022, the proposed legislation entered the trilogue stage. Political agreement was reached on 23 April 2022, and the final text was published in the Official Journal on 27 October 2022.

Terrorist content The EU Regulation on addressing the dissemination of terrorist content online will come into effect on 7 June 2022.

Erosion of intermediary liability shields by omission One by-product of Brexit is that the UK is no longer bound to implement the conduit, caching and hosting shields provided by the EU eCommerce Directive. The government says that it “is committed to upholding the liability protections now that the transition period has ended”.

However, implementation of that policy requires every new piece of legislation that could impose liability on an intermediary explicitly to include the protections. If that is not done then, because the original Electronic Commerce Directive Regulations 2002 do not have prospective effect, the protections will not apply to that new source of liability.

Two examples are already progressing through Parliament: the statutory codification of the public nuisance offence in the Policing Bill (which, following Royal Assent, came into force on 26 June 2022), and the electronic election imprints offences in the Elections Bill (Royal Assent 28 April 2022, not yet in force), neither of which includes the conduit, caching and hosting shields.

Such omissions have been known in the past, and were cured by statutory instrument under the European Communities Act 1972. That option is no longer available. As time goes on, accretion of such omissions in new legislation will gradually erode the intermediary protections to which the government is committed.

Law Commission Reports The Law Commission has issued two Reports making recommendations that are relevant to online speech. The first is its Report on Reform of the Communications Offences (notably, recommending replacing S.127 Communications Act 2003 and the Malicious Communications Act 1988 with a new harm-based offence). The second report is on Hate Crime Laws. The recommendations on communications offences, at least, have been included in the Online Safety Bill.

Copyright The Polish government’s challenge to Article 17 (Poland v Parliament and Council, Case C-401/19) was decided on 26 April 2022. Poland argued that Article 17 makes it necessary for OSSPs, in order to avoid liability, to carry out prior automatic filtering of content uploaded online by users, and therefore to introduce preventive control mechanisms. It contended that such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportionate and necessary.

The Advocate-General’s Opinion was delivered on 15 July 2021. It was something of an Opinion of Solomon: recommending that the challenge be rejected, but only on the basis that the Directive is implemented in a way that minimises false positives. The Advocate General also, in a postscript, challenged aspects of the Article 17 guidance issued by the Commission subsequent to the drafting of the Opinion. The judgment largely followed the Opinion, dismissing the challenge but on the basis of an interpretation of Article 17 that included strict safeguards against removal of lawful content.

Policing Bill The Police, Crime, Sentencing and Courts Bill has ignited significant controversy over its impact on street protests, including through its statutory codification of the common law offence of public nuisance. The potential application of the new statutory offence to online speech, however, has gone virtually unnoticed.  

Product Security and Telecommunications Infrastructure Bill An honourable mention for this Bill: a framework for imposing all kinds of security requirements on (among other things) internet-connectable products.

Back from the dead? The Digital Economy Act 2017 The non-commencement of the age verification provisions of the Digital Economy Act 2017 has long been a source of controversy. In November 2021 the High Court gave permission to two members of the public to commence judicial review proceedings. This may now in practice have been overtaken by the inclusion of pornography sites in the Online Safety Bill.

Cross-border data access The US and the UK signed a Data Access Agreement on 3 October 2019, providing domestic law comfort zones for service providers to respond to data access demands from authorities located in the other country. The Agreement came into force on 3 October 2022.

The Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence was open for signature from 12 May 2022 and was presented to the UK Parliament in July 2022.

State communications surveillance The kaleidoscopic mosaic of cases capable of affecting the UK’s Investigatory Powers Act 2016 (IP Act) continues to reshape itself. In this field CJEU judgments will continue to be relevant in principle, since they form the backdrop to future reviews of the European Commission’s June 2021 UK data protection adequacy decision.

Domestically, Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. Some EU law aspects (including bulk powers) were stayed pending the Privacy International reference to the CJEU. The Divisional Court rejected the claim that the IP Act data retention powers provide for the general and indiscriminate retention of traffic and location data, contrary to EU law; that point may in due course come before the Court of Appeal. The Divisional Court gave judgment on the stayed aspects on 24 June 2022: Liberty's claims were rejected except for one aspect, concerning the need for prior independent authorisation for access to some retained data.

Investigatory Powers Act review The second half of 2022 will see the Secretary of State preparing the report on the operation of the IP Act required under Section 260 of the Act.

Electronic transactions The pandemic focused attention on legal obstacles to transacting electronically and remotely. Whilst uncommon in commercial transactions, some impediments do exist and, in a few cases, were temporarily relaxed. That may pave the way for permanent changes in due course.

Although the question typically asked is whether electronic signatures can be used, the most significant obstacles tend to be presented by surrounding formalities rather than signature requirements themselves. A case in point is the physical presence requirement for witnessing deeds, which stands in the way of remote witnessing by video or screen-sharing. The Law Commission Report on Electronic Execution of Documents recommended that the government should set up an Industry Working Group to look at that and other issues. The Working Group has now been formed. It issued an Interim Report on 1 February 2022.

[Updated 29 April 2022 and 2 November 2022.]