Sunday, 13 July 2025

The Ordinary Reasonable Person encounters (or not) cyber-abuse

The recent decision of the Australian Administrative Review Tribunal in X Corp and Elston v eSafety Commissioner illustrates the complexities that can arise when the law tasks a regulator or platform to adjudge an online post.

The decision grapples with a dilemma that is familiar, albeit under a very different legislative regime, from the UK’s Online Safety Act 2023. It also features in the police takedown notice scheme for unlawful knives and other weapons content contained in the Crime and Policing Bill (currently making its way through Parliament).

At a high level, the issue is how to achieve rapid removal of impugned user content (typically because it is illegal under the general law or defined as harmful in some way), while not affecting legitimate posts. The specific challenge is that the contents of the post alone are often insufficient to determine whether the legal line has been crossed. Contextual information, which may be off-platform and involve investigation, is required. The Elston case provides a vivid illustration.

The twin imperatives of rapid removal and adequate investigation of context stand in conflict with each other. A regime that requires contravention to be adjudged solely on the contents of a post, ignoring external context, is likely to be either ineffectual or overreaching, depending on which way the adjudicator is required to jump in the absence of relevant information.

Australia’s Online Safety Act 2021 empowers the eSafety Commissioner, but only following receipt of a complaint, to issue a content removal notice to a social media platform if she is satisfied that a user’s post constitutes cyber-abuse material targeted at an Australian adult. (In this respect the Australian legislation resembles the UK Crime and Policing Bill more than our Online Safety Act: Ofcom has no power under the OSA to require removal of a specific item of user content. The Crime and Policing Bill will institute a regime of police takedown notices for unlawful knives and other weapons content, albeit not predicated on receipt of a complaint.)

Cyber-abuse material under the Australian Act has two key elements. The eSafety Commissioner has to be satisfied of both before issuing a removal notice:

Intention Element: an ordinary reasonable person would conclude that it is likely that the material was intended to have an effect of causing serious harm to a particular Australian adult.

Offense Element: an ordinary reasonable person in the position of the Australian adult would regard the material as being, in all the circumstances, menacing, harassing or offensive.

Serious harm is defined as serious physical harm or serious harm to a person’s mental health, whether temporary or permanent. Serious harm to a person’s mental health includes:

(a) serious psychological harm; and

(b) serious distress;

but does not include mere ordinary emotional reactions such as those of only distress, grief, fear or anger.

The need to assess what an ‘ordinary reasonable person’ would think is common to both elements. For the Intention Element the Ordinary Reasonable Person has to determine the likely intention of the person who posted the material. For the Offense Element, in order to determine how the material should be regarded, the Ordinary Reasonable Person has to be put in the position of the Australian adult putatively intended to be targeted.

The reason why the legislation hypothesises an Ordinary Reasonable Person is to inject some objectivity into what could otherwise be an overly subjective test.

The Tribunal observed that the Intention Element converted what would otherwise be “a broadly available censorship tool based on emotional responses to posted material” into a provision that “protects people from a much narrower form of conduct where causing serious harm to a particular person was, in the relevant sense, intended” [21]. (This has similarities to the heavy lifting done by the mental element in broadly drafted terrorism offences.)

We are in familiar legal territory with fictive characters such as the Ordinary Reasonable Person. It is reminiscent of the fleeting appearance of the Person of Ordinary Sensibilities in the draft UK Online Safety Bill.

Nevertheless, as the Tribunal decision illustrates, the attributes of the hypothetical person may need further elucidation. Those characteristics can materially affect the balance between freedom of expression and the protective elements of the legislation in question.

Thus, what is the Ordinary Reasonable Person taken generally to know?  What information can the Ordinary Reasonable Person look at in deciding whether intention to cause serious harm is likely? How likely is likely?

The information available to the Ordinary Reasonable Person

The question of what information can, or should, be taken into account is especially pertinent to legislation that requires moderation decisions to be made that will impinge on freedom of expression. The Tribunal posed the question thus:

“… whether findings on the Intention Element should be made on an impressionistic basis after considering a limited range of material, or whether findings should be made after careful consideration, having regard to any evidence obtained as part of any investigation or review process.” [45]

It found that:

“The history and structure of the provisions suggest that while impressionistic decision-making may be authorised in the first instance, early decisions made on limited information can and should be re-visited both internally and externally as more information becomes available, including as a result of input from the affected end-user.” [45]

That was against the background that:

“…the legislation as passed allows for rapid decision making by the Commissioner to deal with material that appears, on its face, to be within a category that the Act specified could be the subject of a removal notice. However, once action has been taken, the insertion of s 220A confirms that Parliament accepted that there needed to be an opportunity for those affected by the action to have an opportunity to address whether the material was actually within the prohibited statutory category. External review by the Tribunal was provided for with the same end in mind.” [44]

The UK Online Safety Act states that a platform making an illegality judgement should do so on the basis of all relevant information reasonably available to it. Ofcom guidance fleshes out what information is to be regarded as reasonably available.

The UK Crime and Policing Bill says nothing about what information a police officer giving an unlawful weapons content removal notice, or a senior officer reviewing such a notice, should seek out and take into account. Nor does it provide any opportunity for the user whose content is condemned to make representations, or to be notified of the decision.

Generally speaking, the less information that can or should be taken into account, the greater the likelihood of arbitrary decision-making and consequent violation of freedom of expression rights.

In the Elston case three different variations on the Ordinary Reasonable Person were put to the Tribunal. The eSafety Commissioner argued that the Ordinary Reasonable Person should be limited to considering the poster’s profile on X and the material constituting the post. The poster’s subsequent evidence about his intention and motivations was irrelevant to determining whether the Intention Element was satisfied. The same was said to apply to evidence about the poster’s knowledge of the Australian person said to be targeted. (The Tribunal observed that that would mean that even material contained in the complaint that preceded the removal notice would be excluded from consideration.)

As to the general knowledge of the Ordinary Reasonable Person, the eSafety Commissioner argued that (for the purposes of the case before the Tribunal, which concerned a post linking to and commenting on a newspaper article about a transgender person) the Ordinary Reasonable Person would be aware that material on X can bully individuals; would understand that public discourse around sexuality and gender can be polarising as well as emotionally charged; and would understand that calling a transgender man a woman would be to act contrary to that transgender man’s wishes.

X Corp argued that the decision-maker was entitled to have regard to evidence (including later evidence) concerning the immediate context as at the time of the post, but no more. The facts that could be known to the ordinary reasonable person when making their assessment included facts about the subject of the post or the poster and their relationship at the time of the post, but not evidence about what happened afterwards.

The significance of the different positions was that on X Corp’s case, later evidence could be taken into account to the effect that the poster did not know, or know of, the person who was the subject of the post until he read the newspaper article. That was not apparent from the post itself or the poster’s profile.

Mr Elston (the poster) argued that a wide range of material could be acquired and treated as available to the ordinary reasonable person when asked to decide whether the material posted ‘was intended to have an effect of causing serious harm’.

On this view of the statutory power, evidence obtained before or after the post, during the course of the investigation and concerning matters that occurred after the post was made, could be treated as available to the Ordinary Reasonable Person when considering the Intention Element.

On this approach, Mr Elston’s own evidence about his intention would be “relevant to consider, but not necessarily conclusive of what an ordinary reasonable person would conclude about his intention.” [62]

The Tribunal agreed with Mr Elston’s approach:

“The existence of the investigative powers available to the Commissioner and the complaint-based nature of the power provide a powerful basis for concluding that the Commissioner and the Tribunal should be feeding all of the available evidence into the assessment of what the ‘ordinary reasonable person’ would conclude was likely before determining whether the Intention Element is satisfied.” [74]

It added:

“The Parliament was concerned to give end-users an opportunity to address claims about their conduct both on internal review and by providing review in the Tribunal. To read the ordinary reasonable person lens as a basis for disregarding evidence submitted by either the complainant or the end-user or discovered by the Commissioner during an investigation is not consistent with the fair, high quality decision-making the Parliament made provision for.” [77]

The Tribunal then spelled out the consequences of the Commissioner’s approach:

“…In many circumstances, including this case, limiting the information that can be considered by the ‘ordinary reasonable person’ to the post and closely related material, results in critical information not being available.” [81]

It went on:

“In this case, there is no evidence in any of the material posted and associated with the post, that the post was ever brought to the attention of Mr Cook [the complainant]. …

That Mr Cook was aware of the post is only discoverable by reference to the complaint submitted to the Commissioner. If a decision maker is restricted to knowing that a post was made to a limited audience, none of whom included Mr Cook, reaching the conclusion that the material was intended to cause serious harm to Mr Cook is going to be difficult. In those circumstances, where there appears to be no evidence to which the decision maker can have regard in order to make a finding that the post came to Mr Cook’s attention, let alone was intended to come to his attention, a decision to issue a removal notice could not be sustained.” [81]

The Tribunal reiterated:

“In many cases, it will be the complaint that provides critical context to allow an ordinary reasonable person to conclude that serious harm was intended.” [81]

The Tribunal concluded that evidence about what happened after the post was posted could be relevant if it shed light on the likely intention of the poster. Similarly, evidence about prior behaviour of third parties in response to certain posts could be relevant, even if it was only discoverable by the regulator using compulsory powers:

“So long as evidence sheds light on the statutory question, then it can and should be considered. It would be inappropriate in advance of a particular factual scenario being presented to the decision-maker to say that there are whole categories of evidence that cannot be considered because the statutory test in all circumstances renders the material irrelevant.” [87]

Nevertheless, that did not mean that the concept of the ‘ordinary and reasonable’ person had no effect:

“It moves the assessment away from a specific factual inquiry concerning the actual thought process of the poster and what effect they intended to achieve by the post. I must undertake a more abstract inquiry about what an independent person (who isn’t me) would think was the poster’s intention having regard to the available evidence. Provided evidence is relevant to that question, then it can and should be considered.” [89]

Whilst specific to the Australian statute and its fictive Ordinary Reasonable Person, this discussion neatly illustrates the point that has repeatedly been made (and often ignored): that the illegality judgements which the UK Online Safety Act requires platforms to make will very often depend on off-platform contextual information and cannot sensibly be made on the basis of a bare user post and profile.

The point assumes greater significance with real-time proactive automated content moderation – something that Ofcom is proposing to extend – which by its very nature is unlikely to have access to off-platform contextual information.

The discussion also speaks eloquently to the silence of the Crime and Policing Bill on what kind and depth of investigation a police officer should conduct in order to be satisfied as to the presence of unlawful weapons content.

Likelihood of serious harm

The other significant point that the Tribunal had to consider was what the statute meant by it being ‘likely’ that serious harm was intended. The rival contentions were ‘real chance’ and ‘more probable than not’. The Tribunal held that, in the statutory context, the latter was right. The conclusion is notable for acknowledging the adverse consequences for freedom of expression of adopting a lower standard:

“A finding by the ordinary reasonable person that a person was setting out to cause serious harm to another is a serious, adverse finding with implications for freedom of expression. It is not the kind of finding that should be made when it is only possible that serious harm was intended.” [119]

The standard set by the UK Online Safety Act for making content illegality judgements is “reasonable grounds to infer”. It remains questionable, to say the least, whether that standard is compatible with ECHR Article 10. The Crime and Policing Bill says no more than that the police officer must be ‘satisfied’ that the material is unlawful weapons content.  

The Tribunal’s conclusion

On the facts of the case, the Tribunal concluded that an ordinary reasonable person in the position of the complainant Mr Cook would regard the post as offensive; but that the Intention Element was not satisfied. That depended crucially on the broader contextual evidence:

“Read in isolation, the post looks to be an attempt to wound Mr Cook and upset him and cause him distress, perhaps even serious distress. If an ordinary reasonable person was only aware of the post, then it may be open to find that the poster’s intention was likely to be to cause serious harm to Mr Cook. However, when the broader context is known and understood, it is difficult to read the post as intended to harm Mr Cook, or intended to have others direct criticism towards Mr Cook or designed to facilitate vitriol by spreading personal information about him.” [191]

Amongst that broader context was the lack of evidence that the poster intended the post to come to Mr Cook’s attention.

“For the post to do any harm it needed to be read by Mr Cook. While I am satisfied that Mr Elston was indifferent to whether the post did come to Mr Cook’s attention and indifferent to whether or not it distressed him, there is no evidence to support the conclusion that the post was made with the intention of it being brought to Mr Cook’s attention.” [197]

Part of the reasoning behind that conclusion was that Mr Elston’s post did not tag Mr Cook’s user handle, but only that of the World Health Organisation (which had appointed Mr Cook to an advisory panel):

“ It is notable that Mr Elston only included the handle for the WHO in his post and there is nothing in the body of the post that attempts to facilitate the contacting of Mr Cook by Mr Elston’s followers. Mr Cook’s name is not used in the body of the post.” [200]

Overall, the Tribunal concluded:

“When the evidence is considered as a whole I am not satisfied that an ordinary reasonable person would conclude that by making the post Mr Elston intended to cause Mr Cook serious harm. In the absence of any evidence that Mr Elston intended that Mr Cook would receive and read the post, and in light of the broader explanation as to why Mr Elston made the post, I am satisfied that an ordinary reasonable person would not conclude that that it is likely that the post was intended to have an effect of causing serious harm to Mr Cook.” [207]

For present purposes the actual result in the Elston case matters less than the illustration that it provides of what can be involved in making judgements about removal or blocking of posts against a statutory test: whether that evaluation be done by a regulator, a platform discharging a duty imposed by statute or (in the likely future case of unlawful weapons content) the police.


Tuesday, 6 May 2025

Knives out for knives

As part of a broader campaign targeting knife crime the Home Office has published its consultation response on a new procedure for authorised police officers to issue takedown notices to online platforms (also now to include search engines). These would require 48-hour removal of specified illegal weapons content items, on pain of civil penalty sanctions.

The government has also tabled implementing amendments to the Crime and Policing Bill. These merit close attention. A takedown regime of this kind inevitably faces some similar issues to those that confronted the Online Safety Act, particularly in how to go about distinguishing illegal from legal content online. The Online Safety Act eventually included some fairly tortuous provisions that attempt (whether successfully or not) to meet those challenges. In contrast, the Policing Bill amendments maintain a judicious silence on some of the thorniest issues.

Parenthetically, as a policy matter the idea of a system for giving authoritative illegal content removal notices to platforms is not necessarily a bad one — so long as the decision to issue a notice is independent and accompanied by robust prior due process safeguards. Back in 2019 I suggested a system of specialist independent tribunals that could be empowered to issue such notices to platforms, as (along with other measures) a preferable alternative to a ‘regulation by discretionary regulator’ scheme. That idea went nowhere.

But back to the Bill amendments. The most critical aspects of an official content removal notice regime are how illegality is to be determined, independence of the notice-giver, prior due process and safeguards. How do the government’s proposals measure up?

What is unlawful weapons content?

As the Online Safety Act has reminded us, the notion of illegal content is not as simple a concept as might be thought; nor is making determinations of illegality.

First off, there is the conceptual problem. Online content as such cannot be illegal: persons, not content, commit offences. It is only what someone does with, or by means of, content that can be illegal.

Of course, in everyday parlance we say that zombie knives are illegal, or that extreme pornography is illegal, and we know what we mean. Statutory drafting has to be more rigorous: it has to reflect the fact that the offence is constituted by what is done with the item or the content, with what intent, and subject to any available defences. It is legally incoherent to say that content constitutes an offence, without seeking to bridge that gap.

The Online Safety Act attempted to grapple with the conceptual difficulty of equating content with an offence. The Policing Bill amendments do not.

For England and Wales new clause NC79 in the Bill amendments asserts that content is “unlawful weapons content” if it is:

“content that constitutes…  an offence under section 1(1) of the Restriction of Offensive Weapons Act 1959 (offering to sell, hire, loan or give away etc a dangerous weapon)”

NC79 provides the same for offences under section 1 or 2 of the Knives Act 1997 (marketing of knives as suitable for combat etc and related publications), and under section 141(1) of the Criminal Justice Act 1988 (offering to sell, hire, loan or give away etc an offensive weapon).

That is all. The Online Safety Act (Section 59(2)) does kick off in a similar way, by stipulating that:

“ “Illegal content” means content that amounts to a relevant offence.”

But (unlike the Policing Bill amendments) section 59(3) goes on to try to bridge the gap between content and conduct:

“Content consisting of certain words, images, speech or sounds amounts to a relevant offence if—

(a) the use of the words, images, speech or sounds amounts to a relevant offence,

(b) the possession, viewing or accessing of the content constitutes a relevant offence, or

(c) the publication or dissemination of the content constitutes a relevant offence.”

The Bill amendments contain no equivalent clause.

Determining illegality

Even if the conceptual gap were to be bridged by a similar amendment clause, that does not mean that illegality is necessarily obvious just by looking at the online content. Each offence has its own conduct elements, mental element and any defences that the legislation may stipulate. Ofcom’s Illegal Content Judgements Guidance under the Online Safety Act devotes three pages to section 1(1) of the Restriction of Offensive Weapons Act 1959 alone.

Two issues arise in determining illegality: what information does the authorised police officer need in order to make a determination, and how sure does the officer have to be that an offence has been committed?

The Online Safety Act, recognising that illegality may have to be considered in a broader context than the online content alone, stipulates that a service provider’s determination of illegality has to be made in the light of all relevant information that is reasonably available to the service provider.

That has some parallels with the duties of investigating police officers under the Criminal Procedure and Investigations Act 1996: that all reasonable steps are taken for the purposes of the investigation and, in particular, that all reasonable lines of inquiry are pursued.

The 1996 Act duty applies to a police investigation conducted with a view to ascertaining whether a person should be charged with an offence, or whether a person charged with an offence is guilty of it. However, ascertaining whether an offence has been committed for the purpose of a content removal notice is not the same as doing so with a view to making a charging decision. In order to issue a content removal notice the officer would not need to identify who had committed the offence – only determine that someone had done so.

Assuming, therefore, that the 1996 Act duty would not apply if a police officer were considering only whether to issue a content removal notice, how far would the police have to go in gathering relevant information before deciding whether an offence had been committed?

There will of course be cases, perhaps even most cases, in which the illegality may be obvious – for instance from the kind of knife involved and what has been said online – and the possibility of a defence remote. But it will not necessarily always be simple, or even possible, to make an illegality determination simply by looking at the online content alone.

The Online Safety Act (and Ofcom’s guidance on making illegality judgements) attempts to indicate what information the service provider should consider in making judgements about illegality. The Bill amendments are silent on this.

Indeed, the Ofcom Online Safety Act guidance (which regards law enforcement as a potential ‘trusted flagger’ for this kind of offence) anticipates that the flagger may provide contextual information: “Reasonably available information for providers of user-to-user and search services” is:

• The content suspected to be illegal content.

• Supporting information provided by any complainant, including that which is provided by any person the provider considers to be a trusted flagger.

The silence of the Bill amendments on this topic is all the more eloquent when we consider that nowhere in the procedures – from content removal notice through to appeal against a civil penalty notice – is there any provision for the person whose content is to be removed to be notified or given the opportunity to make representations.

Comparison with the Online Safety Act

The government emphasises, in its Consultation Response para 6.7, that:

“The proposed measure sits alongside, and does not conflict with, the structures established through the Online Safety Act 2023.”

Strictly speaking that is right: a notice from a police officer under the Bill amendments could have three separate functions or effects:

• Constitute a notice requiring 48-hour takedown under the new provisions.

• Fix the service provider with awareness of illegality for the purpose of the OSA reactive duty under S.10(3)(b).

• Fix the service provider with knowledge of illegality for the purpose of the hosting liability shield derived from the eCommerce Directive.

Since these are three separate, parallel structures, it is correct that they do not conflict[1]. Nevertheless, they are significantly different from each other. As well as the differences from the Online Safety Act already outlined, the role of law enforcement under the Bill amendments is significantly different.

In particular, although under the Online Safety Act law enforcement may be considered to be a trusted flagger, Ofcom cautions that:

“A provider is not required to accept the opinions of a third party as to whether content is illegal content. Only a judgment of a UK court is binding on it in making this determination. In all other cases, it will need to take its own view on the evidence, information and any opinions provided.”

Therein lies the biggest difference between the Online Safety Act and the Bill amendments. Under the Bill amendments, subject to the review procedure outlined below, a service provider is required to act on the opinion of the police.

The government plans that the content removal system will be operated by a new policing unit, which will be responsible for issuing removal notices. That is presumably reflected (in part) by the Bill amendment provision that a content removal notice has to be given by an officer authorised by the Director General of the National Crime Agency or the chief officer of the relevant police force.

How sure that an offence has been committed?

A related aspect of determining illegality is how sure the person making the decision has to be that the content is illegal.  The Online Safety Act stipulates that the provider has to treat the content as illegal if it has ‘reasonable grounds to infer’ that the content is illegal. ‘Reasonable grounds to infer’ is a relatively low threshold, which has given rise to concerns that legitimate content will inevitably be removed with consequent risk of European Convention on Human Rights incompatibility.

The Bill amendments take a different approach: the police officer making the decision must be ‘satisfied’ that the content is unlawful weapons content. ‘Satisfied’ presumably is not intended to be a wholly subjective assessment; equally, it probably does not mean ‘satisfied beyond all reasonable doubt’. So what degree of confidence is implicit in ‘satisfied’? If the police officer has residual doubts, or has insufficient information to make up his or her mind, could the officer be ‘satisfied’ that the content amounts to an offence?

The Online Safety Act provides that a service provider does not have to take into account the possibility of a defence unless it has reasonable grounds to infer that a defence may be successfully relied upon. By contrast, under the Bill amendments it seems likely that the police officer would always have to be satisfied that no defence was available.

Safeguards

The government has sought to address the risk of ill-founded notices by means of a review mechanism. The content removal notice has to explain the police officer’s reasons for considering that the content is unlawful weapons content.  The service provider can request review of a notice by a more senior officer. The reviewing officer must then give a decision notice, setting out the outcome of the review and giving reasons. The government has said that it:

“…believes that the review process designed within the proposal adequately addresses online companies concerns with cases where it would be difficult to determine the illegality of content.” (Consultation Response, [6.8])

The review process, however, sheds no light on how much contextual information gathering by police officers is contemplated, nor on the degree of confidence implicit in being ‘satisfied’. It contains no element of independent third party review, nor any opportunity for the person whose content is to be removed to make representations.

That said, the procedure could perhaps be fleshed out by guidance to law enforcement that the Secretary of State may (but is not required to) issue under NC84.

Underlying all these considerations is the matter of ECHR compatibility. The lower or more subjective the threshold for issuing a notice, the less the predictability of the process or outcome, and the fewer or weaker the safeguards against arbitrary or erroneous decision-making, then the greater the likelihood of ECHR incompatibility.

It might be said against all of this that of course the police would only issue a content removal notice if it was obvious from the online content itself that an offence was being committed. If that were the intention, might it be preferable to make that explicit and write a “manifest illegality” standard into the legislation?

Does it matter?

It could well be questioned why any of this matters. Who really cares if a few less knives appear online because content is wrongly taken down? That kind of argument is depressingly easy to make where impingements on freedom of expression are concerned. Thus in a different context, what does it really matter if, in our quest to root out the evils in society, we sacrifice due process and foreseeability to flexibility and remove a few too many tasteless jokes, insulting tweets, offensive posts, shocking comments, wounding parodies, disrespectful jibes about religion or anything else that thrives in the toxic online hinterland of the nearly illegal?

Opinions on that will differ. For me, it matters because the rule of law matters. Due process provides the opportunity to be heard. It matters that you should be able to predict in advance, with reasonable certainty, whether something that you are contemplating posting online is liable to be taken down as the result of official action (or, for that matter, the action of a platform seeking to comply with a legal or regulatory duty).

If you cannot do that, you are at the mercy of arbitrary exercise of state power. It is knives today, but who knows what tomorrow (we can, however, be sure that once one 48-hour takedown regime is enacted others will follow).  Abandon the rule of law to ad hoc power and, as Robert Bolt had Sir Thomas More declaim to William Roper in A Man For All Seasons:

“…do you really think you could stand upright in the winds that would blow then? Yes, I'd give the Devil benefit of law, for my own safety's sake!”.

However skilled a dedicated police unit may be, expertise is no substitute for due process, safeguards and independent adjudication. Otherwise, why would we bother with courts at all? The fact that content, rather than a person, is condemned is not, I would suggest, a good reason to skimp on rule of law principles.

It may be said that the Bill amendments provide for recourse to the courts. They do, but only once matters have got as far as a civil penalty notice imposing a fine for non-compliance; and they concern only the platform, not the person who posted the content. That is not the same as due process, safeguards or independent review at the outset of the decision-making process.

Extraterritoriality

To finish with a more technical matter: extraterritoriality. The Online Safety Act, although fairly aggressive in its assertion of jurisdiction, did recognise the need to establish some connection with the UK in order for a U2U or search service to fall within its territorial scope. Thus Section 4 of the OSA sets out a series of criteria to determine whether a service is UK-linked.

The Bill amendments contain no such provision. On the face of it a police officer could serve notices under the Act by email and (in the event of non-compliance) impose civil penalties on any service provider anywhere in the world, regardless of whether they have any connection with the UK at all. If that is what is intended, it would be an extraordinary piece of jurisdictional overreach.

That would also (presumably) bring into play delicate judgements by authorised police officers, when considering whether to serve a content removal notice, as to whether an activity on a platform that had no connection with the UK amounted to an offence within the UK. That is a matter of the territorial scope of the underlying UK offence. The Online Safety Act circumvents questions of that kind by, for the purpose of service provider duties, instructing the service provider to disregard territorial considerations:

“For the purposes of determining whether content amounts to an offence, no account is to be taken of whether or not anything done in relation to the content takes place in any part of the United Kingdom.”

The Bill amendments are silent on these difficult jurisdictional issues.


[1] This is on the basis that the notice regime would fall within the eCommerce Directive exception for specific court or administrative authority orders to terminate an infringement. That would depend on whether the police are properly regarded as an administrative authority.  If not, it could be argued that the Policing Bill amendments in substance are inconsistent with the eCommerce Directive hosting liability shield to which, as a matter of policy, the government ostensibly continues to adhere: "The government is committed to upholding the liability protections now that the transition period has ended." (The eCommerce Directive and the UK, last updated 18 January 2021).



Tuesday, 15 April 2025

The computer is always right - or is it?

This is my submission to the Ministry of Justice Call for Evidence on computer evidence in criminal proceedings. 

Some takeaways from 24 pages of rather dense legal analysis:

  • The evidential presumption of reliability (properly so called) is a different animal from informal assumptions about the reliability of computers. They are related, and the latter may influence attitudes to e.g. the threshold for disclosure applications, or to the basic understanding of what the prosecution has to do to prove a case ‘beyond reasonable doubt’ (as opposed to theoretical or fanciful doubt). But the evidential presumption is legally distinct and has a specific, limited, function. The review should cast its net wider than the evidential presumption properly so-called, but equally not confuse it with informal assumptions.
  • There is evidently a perception (possibly fostered by the Law Commission 1995 and 1997 recommendations) that the presumption of reliability applies automatically to all computing devices. I don’t think the caselaw supports that. As I read the cases, the court can decide whether or not to apply the presumption.
  • If the prosecution deploys expert evidence on reliability the presumption is irrelevant: it’s a matter of deciding between experts. So for general purpose computers and software (i.e. excluding breathalysers, speed guns etc, where the presumption is routinely relied upon) how often does the prosecution actually rely on (and then the court apply) the presumption? This is a question for criminal practitioners (which I am not).  I have found no reported criminal cases (other than where accuracy was not questioned), but will readily stand to be corrected if there are any.
  • Reliability of computer evidence always has to be considered in the context of what specifically is sought to be proved by it (which can vary widely). For instance, to make a general point, adducing computer records to evidence presence of a transaction is different from proving absence. The latter requires the computer records to be not just accurate, but complete.
  • It should not be forgotten that a defendant may wish to adduce computer evidence (e.g. a video taken by them on their mobile phone).
  • There may be a distinction to be made between the output of general purpose computing systems and dedicated forensic tools.
  • It may also be pertinent to consider whether the computer evidence sought to be relied upon is central to the prosecution’s case, and whether it is corroborated or uncorroborated.
  • A computer evidence regime that is predicated on whether a document is ‘produced by a computer’ is potentially problematic, for two reasons. First, there is hardly a document now that has not been touched by a computer at some point in its history. Are those all ‘produced by a computer’? Second (as presaged in the caselaw on S.69 PACE 1984), a bright line definition of that kind is liable to give rise to satellite disputes about what does and does not fall on either side of the line. It may be more fruitful to view matters through the lens of a regime for documentary evidence generally.
  • The proposed distinction between generated and captured or recorded evidence is difficult to apply conceptually. The practical examples given in the Call for Evidence throw up many questions.
  • Any proposed computer evidence regime should be tested against concrete hypotheticals.  I have suggested a list of fourteen, drawn from reported cases.




[Updated 16 April 2025 with a list of takeaways]



Tuesday, 11 February 2025

The Online Safety Act grumbles on

Policymakers sometimes comfort themselves that if no-one is completely satisfied, they have probably got it about right. 

On that basis, Ofcom’s implementation of the Online Safety Act’s illegality duties must be near-perfection: the Secretary of State (DSIT) administering a sharp nudge with his draft Statement of Strategic Priorities, while simultaneously under fire for accepting Ofcom’s advice on categorisation of services; volunteer-led community forums threatening to close down in the face of perceived compliance burdens; and many of the Act’s cheerleaders complaining that Ofcom’s implementation has so far served up less substantial fare than they envisaged. 

As of now, an estimated 25,000 UK user-to-user and search providers (plus another 75,000 around the world) are meant to be busily engaged in getting their Illegal Harms risk assessments finished by 16 March. 

Today is Safer Internet Day. So perhaps spare a thought for those who are getting to grips with core and enhanced inputs, puzzling over what amounts to a ‘significant’ number of users, learning that a few risk factors may constitute ‘many’ (footnote 74 to Ofcom’s General Risk Level Table), or wondering whether their service can be ‘low risk’ if they allow users to post hyperlinks.  (Ofcom has determined that hyperlinks are a risk factor for six of the 17 kinds of priority offence designated by the Act: terrorism, CSEA, fraud and financial services, drugs and psychoactive substances, encouraging or assisting suicide and foreign interference offences). 

Grumbles from whichever quarter will come as no great surprise to those (this author included) who have argued from the start that the legislation is an ill-conceived, unworkable mess which was always destined to end in tears. Even so, and making due allowance for the well-nigh impossible task with which Ofcom has been landed, there is an abiding impression that Ofcom’s efforts to flesh out the service provider duties - risk assessment in particular – could have been made easier to understand. 

The original illegal harms consultation drew flak for its sheer bulk: a tad over 1,700 pages. The final round of illegal harms documents is even weightier: over 2,400 pages in all. It is in two parts. The first is a Statement. In accordance with Ofcom’s standing consultation principles, it aims to explain what Ofcom is going to do and why, showing how respondents’ views helped to shape Ofcom’s decisions. That amounts to 1,175 pages, including two summaries. 

The remaining 1,248 pages consist of statutory documents: those that the Act itself requires Ofcom to produce. These are a Register of Risks, Risk Assessment Guidance, Risk Profiles, Record Keeping and Review Guidance, a User to User Illegal Content Code of Practice, a Search Service Illegal Content Code of Practice, Illegal Content Judgements Guidance, Enforcement Guidance, and Guidance on Content Communicated Publicly and Privately. Drafts of the two Codes of Practice were laid before Parliament on 16 December 2024. Ofcom can issue them in final form upon completion of that procedure.

When it comes to ease of understanding, it is tempting to go on at length about the terminological tangles to be found in the documents, particularly around ‘harm’, ‘illegal harm’ and ‘kinds of illegal harm’. But really, what more is worth saying? Ofcom’s documents are, to all intents and purposes, set in stone. Does it help anyone to pen another few thousand words bemoaning opaque language? Other than in giving comfort that they are not alone to those struggling to understand the documents, probably not. Everyone has to get on and make the best of it.

So one illustration will have to suffice. ‘Illegal harm’ is not a term defined or used in the Act. In the original consultation documents Ofcom’s use of ‘illegal harm’ veered back and forth between the underlying offence, the harm caused by an offence, and a general catch-all for the illegality duties; often leaving the reader to guess in which sense it was being used. 

The final documents are improved in some places, but introduce new conundrums in others. One of the most striking examples is paragraph 2.35 and Table 6 of the Risk Assessment Guidance (emphasis added to all quotations below). 

Paragraph 2.35 says: 

“When evaluating the likelihood of a kind of illegal content occurring on your service and the chance of your service being used to commit or facilitate an offence, you should ask yourself the questions set out in Table 6.”

Table 6 is headed: 

“What to consider when assessing the likelihood of illegal content”

The table then switches from ‘illegal content’ to ‘illegal harm’. The first suggested question in the table is whether risk factors indicate that: 

“this kind of illegal harm is likely to occur on your service?” 

‘Illegal harm’ is footnoted with a reference to a definition in the Introduction: 

“the physical or psychological harm which can occur from a user encountering any kind of illegal content…”. 

So what is the reader supposed to be evaluating: the likelihood of occurrence of illegal content, or the likelihood of physical or psychological harm arising from such content? 

If ‘Illegal Harm’ had been nothing more than a title that Ofcom gave to its illegality workstream, then what the term actually meant might not have mattered very much. But the various duties that the Act places on service providers, and even Ofcom’s own duties, rest on carefully crafted distinctions between illegal content, underlying criminal offences and harm (meaning physical or psychological harm) arising from such illegality. 

That can be seen in this visualisation. It illustrates the U2U service provider illegality duties - both risk assessment and substantive - together with the Ofcom duty to prepare an illegality Risks Register and Risk Profiles.  The visualisation divides the duties into four zones (A, B, C and D), explained below. 

A: The duties in this zone require U2U providers to assess certain risks related to illegal content (priority and non-priority). These risks are independent of and unrelated to harm. The risks to be assessed have no direct counterpart in any of the substantive safety duties in Section 10. Their relevance to those safety duties probably lies in the proportionality assessment of measures to fulfil the Section 10 duties. 

Although the service provider’s risk assessment has to take account of the Ofcom Risk Profile that relates to its particular kind of service, Ofcom’s Risk Profiles are narrower in scope than the service provider risk assessment. Under the Act Ofcom’s Risks Register and Risk Profiles are limited to the risk of harm (meaning physical or psychological harm) to individuals in the UK presented by illegal content present on U2U services and by the use of such services for the commission or facilitation of priority offences. 

B:  This zone contains harm-related duties (identified in yellow): Ofcom Risk Profiles, several service provider risk assessment duties framed by reference to harm, plus the one substantive Section 10 duty framed by reference to harm (fed by the results of the harm-related risk assessment duties). Harm has its standard meaning in the Act: physical or psychological harm. 

C: This zone contains two service provider risk assessment duties which are independent of and unrelated to risk of harm, but which feed directly into a corresponding substantive Section 10 duty. 

D: This zone contains the substantive Section 10 duties: one based on harm and three which stand alone. Those three are not directly coupled to the service provider’s risk assessment.

This web of duties is undeniably complex. One can sympathise with the challenge of rendering it into a practical and readily understandable risk assessment process capable of feeding the substantive duties.  Nevertheless, a plainer and more consistently applied approach to terminology in Ofcom's documents would have paid dividends.



Wednesday, 4 December 2024

Safe speech by design

Proponents of a duty of care for online platforms have long dwelt on the theme of safety by design. It has come to the fore again recently with the government’s publication of a draft Statement of Strategic Priorities (SSP) for Ofcom under the Online Safety Act.  Safety by Design is named as one of five key areas.

Ofcom is required to have regard to the final version of the SSP in carrying out its functions under the Act. Given Ofcom’s regulatory independence the government can go only so far in suggesting how Ofcom should do its job. But, in effect, the SSP gives Ofcom a heavy hint about the various directions in which the government would like it to go.

So what does safety by design really mean? How might it fit in with platform (U2U) and search engine duties under the Online Safety Act (OSA)?

Before delving into this, it is worth emphasising that although formulations of online platform safety by design can range very widely [1] [2], for the purposes of the OSA safety by design has to be viewed through the lens of the specific duties imposed by the Act.

This piece focuses on the Act’s U2U illegality duties. Three of the substantive duties concern design or operation of the service:

  • Preventing users encountering priority illegal content by means of the service (S. 10(2)(a))
  • Mitigating and managing the risk of the service being used for the commission or facilitation of a priority offence (as identified in the most recent illegal content risk assessment of the service) (S. 10(2)(b))
  • Mitigating and managing the risks of physical or psychological harm to individuals (again as identified in the most recent illegal content risk assessment) (S. 10(2)(c))

Two further substantive illegality duties are operational, relating to:

  • Minimising the length of time for which priority illegal content is present on the service (S. 10(3)(a))
  • Swiftly taking down illegal content where the service provider is alerted to or otherwise becomes aware of its presence. (S. 10(3)(b))

S.10(4) of the Act gives examples of the areas of the design, operation and use of a service across which the duties apply and in which, if proportionate, they require the service provider to take or use measures. Those include “design of functionalities, algorithms and other features.”

Safety by design in the Online Safety Act

When applied to online speech, the notion of safety by design prompts some immediate questions: What is safety? What is harm?

The OSA is less than helpful about this. It does not define safety, or safety by design. It defines harm as physical or psychological harm, but that term appears in only one of the five substantive illegality duties outlined above. Harm has a more pronounced, but not exclusive, place in the prior illegal content risk assessment that a platform is required to undertake.

Safety by design gained particular prominence with a last-minute House of Lords addition to the Bill: an introductory ‘purpose’ clause. This amendment was the result of cross-party collaboration between the then Conservative government and the Labour Opposition.

What is now Section 1 proclaims (among other things) that the Act provides for a new regulatory framework which has the:

“general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom.”

It goes on to say that, to achieve that purpose, the Act (among other things):

“imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from

(i) illegal content and activity, and

(ii) content and activity that is harmful to children, …”

Finally, and most relevantly, it adds that:

“Duties imposed on providers by this Act seek to secure (among other things) that services regulated by this Act are … safe by design…”.

A purpose clause is intended to assist in the interpretation of the legislation by setting out the purposes for which Parliament intended to legislate, rather than leaving the courts to infer them from the statutory language.

Whether such clauses in fact tend to help or hinder is a matter of lawyerly debate. This clause is especially confusing in its use of terms that are not defined by the Act and do not have a clear and obvious ordinary meaning (“safe” and “safe by design”), mixed up with terms that are specifically defined in the legislation (“harm”, meaning physical or psychological harm).

One thought might be that “safe” means safe from physical or psychological harm, and that “safe by design” should be understood accordingly. However, that seems unlikely since four of the five substantive illegality duties on service providers relate to illegal content and activity per se, irrespective of whether they might involve a risk of physical or psychological harm to individuals.

S.235 defines Ofcom’s “online safety functions” in terms of all its functions under the Act. In contrast, the transitional provisions for Video Service Providers define “safety duties” in terms focused on platform duties in respect of illegality and harm to children.

Similarly, in the earlier part of the Act only those two sets of duties are described (albeit merely in the section headings) as “safety duties”. “Safe by design” may possibly refer to those duties alone.   

The concept of safety by design tends to embody some or all of a number of elements: risk-creating features; prevention and reduction of harm; achieving those by appropriate design of a risk-creating feature, or by adding technical safeguards.

The most general aspect of safety by design concerns timing: that safety should be designed in from the outset rather than thought about afterwards.

Prevention itself has a temporal aspect, but that may relate as much to the kind of measure as to the stage of development at which it should be considered. Thus the Minister’s introduction to the Statement of Strategic Priorities says that it:

“includes ensuring safety is baked into platforms from the start so more harm is caught before it occurs”.

This could refer to the point at which a safety measure intervenes in the user’s activity, as opposed to (or as well as) the stage at which the designers consider it.

Later in the Statement, safety by design is expressly said to include deploying technology in content moderation processes. Providers would be expected to:

“…embed proportionate safety by design principles to mitigate the [risk of their service being used to facilitate illegal activity]. This should include steps such as … where proportionate, deploying technology to improve the scale and effectiveness of content moderation, considering factors including providers’ capacity and users’ freedom of expression and privacy rights.”

An analogy with product safety could suggest that safety by design is about identifying risk-creating features at the design stage and either designing those features in the safest way or incorporating safeguards. That aspect is emphasised by Professor Lorna Woods in a recent paper [3]:

“The objective of ‘safety by design’ is – like product safety – to reduce the tendency of a given feature or service to create or exacerbate such issues.”

Applied to products like cars that would mean that you should consider at the outset where safely to position the fuel tank, not unthinkingly place it somewhere dangerous and try to remedy the problem down the line, or after an accident has happened. Or, if a piece of machinery has a sharp cutting blade, consider at the outset how to add a guard into the design. A culture of safety by design should help to ensure that potential safety risks are considered and not overlooked.  

However, a focus on risk-creating features gives rise to particular difficulties when safety by design is translated to online platforms.

The underlying duty of care reasons for this have been rehearsed on previous occasions (here and here). In short, speech is not a tripping hazard, nor is it a piece of machinery. A cutting machine that presents a risk of physical injury to its operator is nothing like a space in which independent, sentient human beings can converse with each other and choose what to say and do.

Professor Woods [3] suggests that ‘by design’ seeks to ensure that products respect the law (my emphasis). If that is right, then by the same token it could be said that safety by design when applied to online platforms seeks to ensure that in their communications with each other users respect the law (or boundaries of harm set by the legislation). That is a materially different exercise, for which analogies with product safety can be taken only so far.

The June 2021 DCMS/DSIT paper Principles of safer online platform design opened with the statement that:

“Online harms can happen when features and functions on an online platform create a risk to users’ safety.”

For the illegality duties imposed by the OSA, when we set about identifying concrete features and functionalities that are said to create or increase risk of illegality, we run into problems when we move beyond positive platform conduct such as recommender and curation algorithms.

The example of recommender and curation algorithms has the merit of focusing on a feature that the provider has designed and which can causally affect which user content is provided to other users.

But the OSA duties of care – and thus safety by design – go well beyond algorithmic social media curation, extending to (for instance) platforms that do no more than enable users to post to a plain vanilla discussion forum.

Consider the OSA safety duties concerning priority illegal content and priority offences.  What kind of feature would create or increase a risk of, for example, an online user deciding to offer boat trips across the Channel to aspiring illegal immigrants?

The further we move away from positive content-related functionality, the more difficult it becomes to envisage how safety by design grounded in the notion of specific risk-creating features and functions might map on to real-world technical features of online platforms.

The draft SSP confirms that under the OSA safety by design is intended to be about more than algorithms:

“When we discuss safety by design, we mean that regulated providers should look at all areas of their services and business models, including algorithms and functionalities, when considering how to protect all users online. They should focus not only on managing risks but embedding safety outcomes throughout the design and development of new features and functionalities, and consider how to make existing features safer.”

Ofcom faced the question of risk-creating features when preparing the risk profiles that the Act requires it to provide for different kinds of in-scope service. For the U2U illegality risk profile it has to:

“carry out risk assessments to identify and assess the following risks of harm presented by [user to user] services of different kinds—

(a) the risks of harm to individuals in the United Kingdom presented by illegal content present on regulated user-to-user services and by the use of such services for the commission or facilitation of priority offences; …”

The risks that Ofcom has to identify and assess, it should be noted, are not the bare risk of illegal content or illegal activity, but the risk of harm (meaning physical or psychological harm) to individuals presented by such content or activity.

Ofcom is required to identify characteristics of different kinds of services that are relevant to those risks of harm, and to assess the impact of those kinds of characteristics on such risks. “Characteristics” of a service include its functionalities, user base, business model, governance and other systems and processes.

Although a platform has to carry out its own illegal content risk assessment, taking account of Ofcom’s risk profile, the illegality risks that the platform has to assess also include bare (non-harm-related) illegality.

Ofcom recognises that functionalities are not necessarily risk-creating:

“Functionalities in general are not inherently positive nor negative. They facilitate communication at scale and reduce frictions in user-to-user interactions, making it possible to disseminate both positive and harmful content. For example, users can engage with one another through direct messaging and livestreaming, develop relationships and reduce social isolation. In contrast, functionalities can also enable the sharing of illegal material such as livestreams of terrorist atrocities or messages sent with the intent of grooming children.” [6W.16]

Ofcom overcomes this issue in its proposed risk profiles by going beyond characteristics that of themselves create or increase risks of illegality. This is most clearly expressed in Volume 2 of its Illegal Harms Consultation:

“We recognise that not all characteristics are inherently harmful; we therefore use the term ‘risk factor’ to describe a characteristic for which there is evidence of a risk of harm to individuals. For example, a functionality like livestreaming is not inherently risky but evidence has shown that it can be abused by perpetrators; when considering specific offences such as terrorism or CSEA, a functionality like livestreaming can give rise to risk of harm or the commission or facilitation of an offence.” [5.26]

General purpose functionality and features of online communication can thus be designated as risk factors, on the basis that there is evidence that wrongdoers make use of them or, in some instances, of certain combinations of features.

Since measures focused on general purpose features are likely to be vulnerable to objections of disproportionate interference with freedom of expression, for such features the focus of preventing or mitigating the identified risk is more likely to be on other aspects of the platform’s design, on user options and controls in relation to that feature (e.g. an option to disable the feature), or on measures such as content moderation.

Ofcom implicitly recognises this in the context of livestreaming:

“6.11 We acknowledge that some of the risk factors, which the evidence has demonstrated are linked to a particular kind of illegal harm, can also be beneficial to users. This can be in terms of the communication that they facilitate, or in some cases fulfilling other objectives, such as protecting user privacy. …

6.13 While livestreaming can be a risk factor for several kinds of illegal harm as it can allow the real-time sharing of illegal content, it also allows for real-time updates in news, providing crucial information to a wide-range of individuals.

6.14 These considerations are a key part of the analysis underpinning our Codes measures.”

The result is that while the illegality risk profiles that Ofcom has proposed include as risk factors a range of platform features that could be viewed as general purpose, they tend not to translate into recommended measures aimed at inhibiting those features.

Here is a selection of features included in the proposed illegality risk profile:

Service feature, followed by risk (likelihood of increased risk of harm related to offences involving):

• Ability to create user profiles: grooming, harassment, stalking, threats, abuse, drugs and psychoactive substances, unlawful immigration, human trafficking, sexual exploitation of adults; and, for the risk of fake profiles, grooming, harassment, stalking, threats, abuse, controlling or coercive behaviour, proceeds of crime, fraud and financial services, foreign interference offences.

• Users can form user groups: grooming, encouraging or assisting suicide or serious self-harm, drugs and psychoactive substances, unlawful immigration, human trafficking.

• Livestreaming: terrorism, grooming, image-based CSAM, encouraging or assisting suicide or serious self-harm, harassment, stalking, threats, abuse.

• Direct messaging: grooming and CSAM, hate, harassment, stalking, threats, abuse, controlling or coercive behaviour, intimate image abuse, fraud and financial services offences.

• Encrypted messaging: terrorism, grooming, CSAM, drugs and psychoactive substances, sexual exploitation of adults, foreign interference, fraud and financial services offences.

• Ability to comment on content: terrorism, grooming, encouraging or assisting suicide or serious self-harm, hate, harassment, stalking, threats, abuse.

• Ability to post images or videos: terrorism, image-based CSAM, encouraging or assisting suicide or serious self-harm, controlling or coercive behaviour, drugs and psychoactive substances, extreme pornography, intimate image abuse.

• Ability to repost or forward content: encouraging or assisting suicide or serious self-harm, harassment, stalking, threats, abuse, intimate image abuse, foreign interference.

• Ability to search for user generated content: drugs and psychoactive substances, firearms and other weapons, extreme pornography, fraud and financial services offences.

• Hyperlinks: terrorism, CSAM URLs, foreign interference offences.

Other functionality risk factors include anonymity, user connections (such as friending and following), group messaging, and ability to post or send location information.

Designation of general purpose functionality as a risk factor reaches a high point with hyperlinks. Since terrorists and other potential perpetrators can use hyperlinks to point people to illegal material, hyperlinks can be designated as a risk factor despite not being inherently harmful.

It is worth recalling what the ECtHR said in Magyar Jeti Zrt about the central role of hyperlinks in internet communication:

“Furthermore, bearing in mind the role of the Internet in enhancing the public’s access to news and information, the Court points out that the very purpose of hyperlinks is, by directing to other pages and web resources, to allow Internet users to navigate to and from material in a network characterised by the availability of an immense amount of information. Hyperlinks contribute to the smooth operation of the Internet by making information accessible through linking it to each other.”

General purpose functionality as a risk factor was foreshadowed in the June 2021 DCMS paper. Arguably it went further, asserting in effect that providing a platform for users to communicate with each other is itself a risk-creating activity:

“Your users may be at increased risk of online harms if your platform allows them to:

• interact with each other, such as through chat, comments, liking or tagging
• create and share text, images, audio or video (user-generated content)”

In the context of the internet in the 21st century, this list of features describes commonplace aspects of the ability to communicate electronically. In a former age we might equally have said that pen, paper, typewriter and the printing press are risk factors, since perpetrators of wrongdoing may use written communications for their nefarious purposes.

Whilst Ofcom recognises the potential freedom of expression implications of treating general purpose functionalities as illegality risk factors, it always has to be borne in mind that from a fundamental rights perspective the starting point is that speech is a right, not a risk. Indeed the Indian Supreme Court has held that the right of freedom of expression includes the reach of online individual speech:

"There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible."

That is not to suggest that freedom of expression is an absolute right. But any interference has to be founded on a sufficiently clear and precise rule (especially from the perspective of the user whose expression is liable to be interfered with), and must then satisfy necessity and proportionality tests.

Preventative technological measures

A preventative approach to safety by design can easily lean towards technological measures: since this is a technology product, technological preventative measures should be designed into the service and considered at the outset.

Professor Woods [3] argues that:

“Designing for safety (or some other societal value) does not equate to techno-solutionism (or techno-optimism); the reliance on a “magic box” to solve society’s woes or provide a quick fix.”

However, in the hands of government and regulators it has a strong tendency to do so [4]. Indeed the draft SSP devotes one of its five key priorities to Technology and Innovation, opening with:

“Technology is vital to protecting users online and for platforms fulfilling their duties under the Act.”

Later:

“It is not enough that new, innovative solutions to known problems exist – online service providers must also adopt and deploy these solutions to improve user safety. … The government … encourages Ofcom to be ambitious in its [code of practice] recommendations and ensure they maintain pace with technology as it develops.”

We have already seen that in the draft SSP, safety by design is said to include deploying technology in content moderation processes.

On the basis of prevention, an inbuilt technological design measure that reduces the amount of (or exposure to) illegal user speech or activity should be preferable to hiring legions of content moderators when the platform starts operating.

However, translating duties of care or safety by design into automated or technology-assisted content moderation can come into conflict with an approach in which non-content-specific safety features are seen as preferable.

Professor Woods said in the same paper:

“At the moment, content moderation seems to be in tension with the design features that are influencing the creation of content in the first place, making moderation a harder job. So, a “by design” approach is a necessary precondition for ensuring that other ex post responses have a chance of success.

While a “by design” approach is important, it is not sufficient on its own; there will be a need to keep reviewing design choices and updating them, as well as perhaps considering ex post measures to deal with residual issues that cannot be designed out, even if the incidence of such issues has been reduced.”

As to what ex post measures might consist of, in a letter to The Times in August, Professor Woods said:

“Through a duty of care, service operators are required to ensure that their products are as safe as reasonably possible and to take steps to mitigate unintended consequences. Essentially this is product safety, or health and safety at work. This approach allows a range of interventions that do not rely on content take-down and, indeed, could be content-neutral. One example might be creator reward programmes that incentivise the spreading of clickbait material.” (emphasis added)

Maeve Walsh, writing for the Online Safety Network shortly before publication of the draft SSP [5], contrasted safety by design with thinking about the OSA “primarily as a takedown-focused regime, centering on individual pieces of content.”

Content-neutrality suggests that a safety measure in relation to a functional feature should, rather than relating specifically to some kind of illegal or harmful content, either have no effect on content as such or, if it does affect user content, do so agnostically.

Some measures have no direct effect on user content: a help button would be an example. Others may affect content, but are not targeted at particular kinds of content: for instance, a friction-introducing measure like capping the permissible number of reposts, or other measures inhibiting virality.

A measure such as a quantitative cap on the use of some feature has the advantage, from a rule of law perspective, that it can be clearly and precisely articulated. However, because it constrains legitimate as well as illegitimate user speech across the board, it is potentially vulnerable to proportionality objections.
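By way of illustration only (this is not something prescribed by the draft SSP or Ofcom’s codes), a repost cap of the kind mentioned above can be expressed as a rule that looks only at a running count, never at the material being shared. The names and cap value in the sketch below are hypothetical assumptions; the point is simply to show why such a measure is content-neutral and precisely articulable, yet necessarily constrains lawful and unlawful posts alike.

```python
# Minimal sketch of a content-neutral repost cap (illustrative only; the cap
# value and names are assumptions, not anything taken from the OSA or Ofcom).

REPOST_CAP = 5  # hypothetical limit on onward shares of a single item


def may_repost(times_already_forwarded: int) -> bool:
    """Decide whether a further repost of an item is permitted.

    The rule inspects only a count, never the content itself, so it applies
    identically to legitimate and illegitimate material - which is what makes
    it precise and content-agnostic, but also what exposes it to
    proportionality objections.
    """
    return times_already_forwarded < REPOST_CAP


# Example: an item already forwarded five times could not be shared again.
print(may_repost(3))  # True
print(may_repost(5))  # False
```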

Thanks to the difficulty of making accurate illegality judgements, automated content filtering and blocking technologies are potentially at risk on both scores.

[1] Trust & Safety Professional Association. Safety by Design Curriculum chapter.

[2] Australian eSafety Commissioner. Safety by Design.

[3] Professor Lorna Woods, for the Online Safety Network (October 2024). Safety by Design.

[4] Maria P. Angel, danah boyd (12 March 2024). Techno-legal Solutionism: Regulating Children’s Online Safety in the United States. Proceedings of the 3rd ACM Computer Science and Law Symposium (CSLAW’24).

[5] Maeve Walsh, for the Online Safety Network (11 October 2024). Safety by design: has its time finally come?