Wednesday, 4 December 2024

Safe speech by design

Proponents of a duty of care for online platforms have long dwelt on the theme of safety by design. It has come to the fore again recently with the government’s publication of a draft Statement of Strategic Priorities (SSP) for Ofcom under the Online Safety Act.  Safety by Design is named as one of five key areas.

Ofcom is required to have regard to the final version of the SSP in carrying out its functions under the Act. Given Ofcom’s regulatory independence the government can go only so far in suggesting how Ofcom should do its job. But, in effect, the SSP gives Ofcom a heavy hint about the various directions in which the government would like it to go.

So what does safety by design really mean? How might it fit in with platform (U2U) and search engine duties under the Online Safety Act (OSA)?

Before delving into this, it is worth emphasising that although formulations of online platform safety by design can range very widely [1] [2], for the purposes of the OSA safety by design has to be viewed through the lens of the specific duties imposed by the Act.

This piece focuses on the Act’s U2U illegality duties. Three of the substantive duties concern design or operation of the service:

  • Preventing users encountering priority illegal content by means of the service (S. 10(2)(a))
  • Mitigating and managing the risk of the service being used for the commission or facilitation of a priority offence (as identified in the most recent illegal content risk assessment of the service) (S. 10(2)(b))
  • Mitigating and managing the risks of physical or psychological harm to individuals (again as identified in the most recent illegal content risk assessment) (S. 10(2)(c))

Two further substantive illegality duties are operational, relating to:

  • Minimising the length of time for which priority illegal content is present on the service (S. 10(3)(a))
  • Swiftly taking down illegal content where the service provider is alerted to or otherwise becomes aware of its presence. (S. 10(3)(b))

S.10(4) of the Act gives examples of the areas of the design, operation and use of a service to which the duties apply and in which, if proportionate, the service provider is required to take or use measures. Those include “design of functionalities, algorithms and other features.”

Safety by design in the Online Safety Act

When applied to online speech, the notion of safety by design prompts some immediate questions: What is safety? What is harm?

The OSA is less than helpful about this. It does not define safety, or safety by design. It defines harm as physical or psychological harm, but that term appears in only one of the five substantive illegality duties outlined above. Harm has a more pronounced, but not exclusive, place in the prior illegal content risk assessment that a platform is required to undertake.

Safety by design gained particular prominence with a last-minute House of Lords addition to the Bill: an introductory ‘purpose’ clause. This amendment was the result of cross-party collaboration between the then Conservative government and the Labour Opposition.

What is now Section 1 proclaims (among other things) that the Act provides for a new regulatory framework which has the:

“general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom.”

It goes on that to achieve that purpose, the Act (among other things):

“imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from

(i) illegal content and activity, and

(ii) content and activity that is harmful to children, …”

Finally, and most relevantly, it adds that:

“Duties imposed on providers by this Act seek to secure (among other things) that services regulated by this Act are … safe by design…”.

A purpose clause is intended to assist in the interpretation of the legislation by setting out the purposes for which Parliament intended to legislate, rather than leaving the courts to infer them from the statutory language.

Whether such clauses in fact tend to help or hinder is a matter of lawyerly debate. This clause is especially confusing in its use of terms that are not defined by the Act and do not have a clear and obvious ordinary meaning (“safe” and “safe by design”), mixed up with terms that are specifically defined in the legislation (“harm”, meaning physical or psychological harm).

One thought might be that “safe” means safe from physical or psychological harm, and that “safe by design” should be understood accordingly. However, that seems unlikely since four of the five substantive illegality duties on service providers relate to illegal content and activity per se, irrespective of whether they might involve a risk of physical or psychological harm to individuals.

S.235 defines Ofcom’s “online safety functions” in terms of all its functions under the Act. In contrast, the transitional provisions for Video Service Providers define “safety duties” in terms focused on platform duties in respect of illegality and harm to children.

Similarly, in the earlier part of the Act only those two sets of duties are described (albeit merely in the section headings) as “safety duties”. “Safe by design” may possibly refer to those duties alone.   

The concept of safety by design tends to embody some or all of a number of elements: risk-creating features; prevention and reduction of harm; achieving those by appropriate design of a risk-creating feature, or by adding technical safeguards.

The most general aspect of safety by design concerns timing: that safety should be designed in from the outset rather than thought about afterwards.

Prevention itself has a temporal aspect, but that may relate as much to the kind of measure as to the stage of development at which it should be considered. Thus the Minister’s introduction to the Statement of Strategic Priorities says that it:

“includes ensuring safety is baked into platforms from the start so more harm is caught before it occurs”.

This could refer to the point at which a safety measure intervenes in the user’s activity, as opposed to (or as well as) the stage at which the designers consider it.

Later in the Statement, safety by design is expressly said to include deploying technology in content moderation processes. Providers would be expected to:

“…embed proportionate safety by design principles to mitigate the [risk of their service being used to facilitate illegal activity]. This should include steps such as … where proportionate, deploying technology to improve the scale and effectiveness of content moderation, considering factors including providers’ capacity and users’ freedom of expression and privacy rights.”

An analogy with product safety could suggest that safety by design is about identifying risk-creating features at the design stage and either designing those features in the safest way or incorporating safeguards. That aspect is emphasised by Professor Lorna Woods in a recent paper [3]:

“The objective of ‘safety by design’ is – like product safety – to reduce the tendency of a given feature or service to create or exacerbate such issues.”

Applied to products like cars that would mean that you should consider at the outset where safely to position the fuel tank, not unthinkingly place it somewhere dangerous and try to remedy the problem down the line, or after an accident has happened. Or, if a piece of machinery has a sharp cutting blade, consider at the outset how to add a guard into the design. A culture of safety by design should help to ensure that potential safety risks are considered and not overlooked.  

However, a focus on risk-creating features gives rise to particular difficulties when safety by design is translated to online platforms.

The underlying duty of care reasons for this have been rehearsed on previous occasions (here and here). In short, speech is not a tripping hazard, nor is it a piece of machinery. A cutting machine that presents a risk of physical injury to its operator is nothing like a space in which independent, sentient human beings can converse with each other and choose what to say and do.

Professor Woods [3] suggests that ‘by design’ seeks to ensure that products respect the law (my emphasis). If that is right, then by the same token it could be said that safety by design when applied to online platforms seeks to ensure that in their communications with each other users respect the law (or boundaries of harm set by the legislation). That is a materially different exercise, for which analogies with product safety can be taken only so far.

The June 2021 DCMS/DSIT paper Principles of safer online platform design opened with the statement that:

“Online harms can happen when features and functions on an online platform create a risk to users’ safety.”

For the illegality duties imposed by the OSA, when we set about identifying concrete features and functionalities that are said to create or increase risk of illegality, we run into problems when we move beyond positive platform conduct such as recommender and curation algorithms.

The example of recommender and curation algorithms has the merit of focusing on a feature that the provider has designed and which can causally affect which user content is provided to other users.

But the OSA duties of care – and thus safety by design – go well beyond algorithmic social media curation, extending to (for instance) platforms that do no more than enable users to post to a plain vanilla discussion forum.

Consider the OSA safety duties concerning priority illegal content and priority offences.  What kind of feature would create or increase a risk of, for example, an online user deciding to offer boat trips across the Channel to aspiring illegal immigrants?

The further we move away from positive content-related functionality, the more difficult it becomes to envisage how safety by design grounded in the notion of specific risk-creating features and functions might map on to real-world technical features of online platforms.

The draft SSP confirms that under the OSA safety by design is intended to be about more than algorithms:

“When we discuss safety by design, we mean that regulated providers should look at all areas of their services and business models, including algorithms and functionalities, when considering how to protect all users online. They should focus not only on managing risks but embedding safety outcomes throughout the design and development of new features and functionalities, and consider how to make existing features safer.”

Ofcom faced the question of risk-creating features when preparing the risk profiles that the Act requires it to provide for different kinds of in-scope service. For the U2U illegality risk profile it has to:

“carry out risk assessments to identify and assess the following risks of harm presented by [user to user] services of different kinds—

(a) the risks of harm to individuals in the United Kingdom presented by illegal content present on regulated user-to-user services and by the use of such services for the commission or facilitation of priority offences; …”

The risks that Ofcom has to identify and assess, it should be noted, are not the bare risk of illegal content or illegal activity, but the risk of harm (meaning physical or psychological harm) to individuals presented by such content or activity.

Ofcom is required to identify characteristics of different kinds of services that are relevant to those risks of harm, and to assess the impact of those kinds of characteristics on such risks. “Characteristics” of a service include its functionalities, user base, business model, governance and other systems and processes.

Although a platform has to carry out its own illegal content risk assessment, taking account of Ofcom’s risk profile, the illegality risks that the platform has to assess also include bare (non-harm-related) illegality.

Ofcom recognises that functionalities are not necessarily risk-creating:

“Functionalities in general are not inherently positive nor negative. They facilitate communication at scale and reduce frictions in user-to-user interactions, making it possible to disseminate both positive and harmful content. For example, users can engage with one another through direct messaging and livestreaming, develop relationships and reduce social isolation. In contrast, functionalities can also enable the sharing of illegal material such as livestreams of terrorist atrocities or messages sent with the intent of grooming children.” [6W.16]

Ofcom overcomes this issue in its proposed risk profiles by going beyond characteristics that of themselves create or increase risks of illegality. This is most clearly expressed in Volume 2 of its Illegal Harms Consultation:

“We recognise that not all characteristics are inherently harmful; we therefore use the term ‘risk factor’ to describe a characteristic for which there is evidence of a risk of harm to individuals. For example, a functionality like livestreaming is not inherently risky but evidence has shown that it can be abused by perpetrators; when considering specific offences such as terrorism or CSEA, a functionality like livestreaming can give rise to risk of harm or the commission or facilitation of an offence.” [5.26]

General purpose functionality and features of online communication can thus be designated as risk factors, on the basis that there is evidence that wrongdoers make use of them or, in some instances, certain combinations of features.

Since measures focused on general purpose features are likely to be vulnerable to objections of disproportionate interference with freedom of expression, for such features the focus of preventing or mitigating the identified risk is more likely to be on other aspects of the platform’s design, on user options and controls in relation to that feature (e.g. an option to disable the feature), or on measures such as content moderation.

Ofcom implicitly recognises this in the context of livestreaming:

“6.11 We acknowledge that some of the risk factors, which the evidence has demonstrated are linked to a particular kind of illegal harm, can also be beneficial to users. This can be in terms of the communication that they facilitate, or in some cases fulfilling other objectives, such as protecting user privacy. …

6.13 While livestreaming can be a risk factor for several kinds of illegal harm as it can allow the real-time sharing of illegal content, it also allows for real-time updates in news, providing crucial information to a wide-range of individuals.

6.14 These considerations are a key part of the analysis underpinning our Codes measures.”

The result is that while the illegality risk profiles that Ofcom has proposed include as risk factors a range of platform features that could be viewed as general purpose, they tend not to translate into recommended measures aimed at inhibiting that feature.

Here is a selection of features included in the proposed illegality risk profile, together with the offences for which each feature is identified as carrying a likelihood of increased risk of harm:

  • Ability to create user profiles: Grooming, harassment, stalking, threats, abuse, drugs and psychoactive substances, unlawful immigration, human trafficking, sexual exploitation of adults; and for the risk of fake profiles: Grooming, harassment, stalking, threats, abuse, controlling or coercive behaviour, proceeds of crime, fraud and financial services, foreign interference offences.
  • Users can form user groups: Grooming, encouraging or assisting suicide or serious self-harm, drugs and psychoactive substances, unlawful immigration, human trafficking.
  • Livestreaming: Terrorism, grooming, image-based CSAM, encouraging or assisting suicide or serious self-harm, harassment, stalking, threats, abuse.
  • Direct messaging: Grooming and CSAM, hate, harassment, stalking, threats, abuse, controlling or coercive behaviour, intimate image abuse, fraud and financial services offences.
  • Encrypted messaging: Terrorism, grooming, CSAM, drugs and psychoactive substances, sexual exploitation of adults, foreign interference, fraud and financial services offences.
  • Ability to comment on content: Terrorism, grooming, encouraging or assisting suicide or serious self-harm, hate, harassment, stalking, threats, abuse.
  • Ability to post images or videos: Terrorism, image-based CSAM, encouraging or assisting suicide or serious self-harm, controlling or coercive behaviour, drugs and psychoactive substances, extreme pornography, intimate image abuse.
  • Ability to repost or forward content: Encouraging or assisting suicide or serious self-harm, harassment, stalking, threats, abuse, intimate image abuse, foreign interference.
  • Ability to search for user generated content: Drugs and psychoactive substances, firearms and other weapons, extreme pornography, fraud and financial services offences.
  • Hyperlinks: Terrorism, CSAM URLs, foreign interference offences.

Other functionality risk factors include anonymity, user connections (such as friending and following), group messaging, and the ability to post or send location information.
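
Purely by way of illustration of how a provider might operationalise a risk profile of this kind when seeding its own risk assessment, the toy sketch below maps a handful of the functionalities above to the associated offence areas. The data structure and function names are my own invention, not Ofcom's:

    # Illustrative only: a toy representation of part of the functionality-based
    # risk profile described above. The structure is an assumption for this example;
    # the offence labels follow the selection quoted in the text.

    RISK_PROFILE = {
        "livestreaming": {
            "terrorism", "grooming", "image-based CSAM",
            "encouraging or assisting suicide or serious self-harm",
            "harassment, stalking, threats, abuse",
        },
        "direct messaging": {
            "grooming and CSAM", "hate", "harassment, stalking, threats, abuse",
            "controlling or coercive behaviour", "intimate image abuse",
            "fraud and financial services offences",
        },
        "hyperlinks": {"terrorism", "CSAM URLs", "foreign interference offences"},
    }

    def risk_factors_for_service(features_offered: set[str]) -> dict[str, set[str]]:
        """Return the offence areas flagged for the functionalities a service offers."""
        return {f: RISK_PROFILE[f] for f in features_offered if f in RISK_PROFILE}

    # In this toy profile only three functionalities are listed, so a plain forum
    # offering hyperlinks and commenting picks up just the hyperlink entry.
    print(risk_factors_for_service({"hyperlinks", "ability to comment on content"}))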

Designation of general purpose functionality as a risk factor reaches a high point with hyperlinks. Since terrorists and other potential perpetrators can use hyperlinks to point people to illegal material, hyperlinks can be designated as a risk factor despite not being inherently harmful.

It is worth recalling what the ECtHR said in Magyar Jeti Zrt v Hungary about the central role of hyperlinks in internet communication:

“Furthermore, bearing in mind the role of the Internet in enhancing the public’s access to news and information, the Court points out that the very purpose of hyperlinks is, by directing to other pages and web resources, to allow Internet users to navigate to and from material in a network characterised by the availability of an immense amount of information. Hyperlinks contribute to the smooth operation of the Internet by making information accessible through linking it to each other.”

General purpose functionality as a risk factor was foreshadowed in the June 2021 DCMS paper. Arguably it went further, asserting in effect that providing a platform for users to communicate with each other is itself a risk-creating activity:

“Your users may be at increased risk of online harms if your platform allows them to:

  • interact with each other, such as through chat, comments, liking or tagging
  • create and share text, images, audio or video (user-generated content)”

In the context of the internet in the 21st century, this list of features describes commonplace aspects of the ability to communicate electronically. In a former age we might equally have said that pen, paper, typewriter and the printing press are risk factors, since perpetrators of wrongdoing may use written communications for their nefarious purposes.

Whilst Ofcom recognises the potential freedom of expression implications of treating general purpose functionalities as illegality risk factors, it always has to be borne in mind that from a fundamental rights perspective the starting point is that speech is a right, not a risk. Indeed the Indian Supreme Court has held that the right of freedom of expression includes the reach of online individual speech:

"There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible."

That is not to suggest that freedom of expression is an absolute right. But any interference has to be founded on a sufficiently clear and precise rule (especially from the perspective of the user whose expression is liable to be interfered with), and must then satisfy necessity and proportionality tests.

Preventative technological measures

A preventative approach to safety by design can easily lean towards technological measures: since this is a technology product, technological preventative measures should be designed in to the service and considered at the outset.

Professor Woods [3] argues that:

“Designing for safety (or some other societal value) does not equate to techno-solutionism (or techno-optimism); the reliance on a “magic box” to solve society’s woes or provide a quick fix.”

However, in the hands of government and regulators it has a strong tendency to do so [4]. Indeed the draft SSP devotes one of its five key priorities to Technology and Innovation, opening with:

“Technology is vital to protecting users online and for platforms fulfilling their duties under the Act.”

Later:

“It is not enough that new, innovative solutions to known problems exist – online service providers must also adopt and deploy these solutions to improve user safety. … The government … encourages Ofcom to be ambitious in its [code of practice] recommendations and ensure they maintain pace with technology as it develops.”

We have already seen that in the draft SSP, safety by design is said to include deploying technology in content moderation processes.

On the basis of prevention, an inbuilt technological design measure that reduces the amount of (or exposure to) illegal user speech or activity should be preferable to hiring legions of content moderators when the platform starts operating.

However, translating duties of care or safety by design into automated or technology-assisted content moderation can come into conflict with an approach in which non-content-specific safety features are seen as preferable.

Professor Woods said in the same paper:

“At the moment, content moderation seems to be in tension with the design features that are influencing the creation of content in the first place, making moderation a harder job. So, a “by design” approach is a necessary precondition for ensuring that other ex post responses have a chance of success.

While a “by design” approach is important, it is not sufficient on its own; there will be a need to keep reviewing design choices and updating them, as well as perhaps considering ex post measures to deal with residual issues that cannot be designed out, even if the incidence of such issues has been reduced.”

As to what ex post measures might consist of, in a letter to The Times in August, Professor Woods said:

“Through a duty of care, service operators are required to ensure that their products are as safe as reasonably possible and to take steps to mitigate unintended consequences. Essentially this is product safety, or health and safety at work. This approach allows a range of interventions that do not rely on content take-down and, indeed, could be content-neutral. One example might be creator reward programmes that incentivise the spreading of clickbait material.” (emphasis added)

Maeve Walsh, writing for the Online Safety Network shortly before publication of the draft SSP [5], contrasted safety by design with thinking about the OSA “primarily as a takedown-focused regime, centering on individual pieces of content.”

Content-neutrality suggests that a safety measure in relation to a functional feature should, rather than relating specifically to some kind of illegal or harmful content, either have no effect on content as such or, if it does affect user content, do so agnostically.

Some measures have no direct effect on user content: a help button would be an example. Others may affect content, but are not targeted at particular kinds of content: for instance, a friction measure such as capping the permissible number of reposts, or other measures inhibiting virality.

A measure such as a quantitative cap on the use of some feature has the advantage, from a rule of law perspective, that it can be clearly and precisely articulated. However, because it constrains legitimate as well as illegitimate user speech across the board, it is potentially vulnerable to proportionality objections.

Given the difficulty of making accurate illegality judgements, automated content filtering and blocking technologies are potentially vulnerable on both scores.
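
To make the repost cap example concrete, here is a minimal sketch of such a measure. The cap figure and names are invented for the illustration; the point is simply that the rule can be stated precisely without reference to what the content says:

    # Illustrative sketch of a content-neutral friction measure: a per-item cap on
    # reposts. The cap value is an assumption; nothing in the Act or Ofcom's
    # proposals prescribes such a figure.

    REPOST_CAP = 1000  # assumed maximum number of times any single item may be reshared

    def can_repost(current_repost_count: int, cap: int = REPOST_CAP) -> bool:
        """Allow a further reshare only while the item remains below the cap."""
        return current_repost_count < cap

    # The rule is content-agnostic: it applies to a news story, a holiday photo and
    # an illegal post alike. That is what makes it clearly articulable, and also what
    # exposes it to the proportionality objection discussed above.
    print(can_repost(999))   # True
    print(can_repost(1000))  # False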

[1] Trust & Safety Professional Association. Safety by Design Curriculum chapter.

[2] Australian eSafety Commissioner. Safety by Design.

[3] Professor Lorna Woods, for the Online Safety Network (October 2024). Safety by Design

[4] Maria P. Angel, danah boyd (12 March 2024). Proceedings of 3rd ACM Computer Science and Law Symposium (CSLAW’24) Techno-legal Solutionism: Regulating Children’s Online Safety in the United States.

[5] Maeve Walsh, for the Online Safety Network (11 October 2024). Safety by design: has its time finally come? 



Wednesday, 30 October 2024

Data Protection meets the Online Safety Act

This is the sixth and final instalment in a series of reflections on Ofcom’s Illegal Harms consultation under the Online Safety Act 2023. Ofcom is due to publish the final version of its Illegal Harms Codes of Practice and Guidance in December.

The interaction between data protection law and the Online Safety Act’s illegal content duties attracted almost no attention during the passage of the Bill through Parliament. Nor does data protection garner more than a bare mention in the body of the Act itself. Nevertheless, service providers will have to perform their obligations compatibly with data protection laws.

However, data protection law does not sit entirely neatly alongside the OSA. It overlaps and potentially collides with some of the substantive measures that the Act requires service providers to take. This creates tensions between the two regimes.

As the process of implementing the OSA’s service provider duties has got under way, more attention has been directed to how the two regimes fit together.

At the most general level, while the Bill was still under discussion, on 25 November 2022 the ICO and Ofcom published a Joint Statement on online safety and data protection. This was an aspirational document, setting out shared goals of maximising coherence between the data protection and online safety regimes and working together to promote compliance with them. It envisaged a renewed formal memorandum of understanding between the ICO and Ofcom (yet to appear).  A more detailed Joint Statement on collaboration between the two regulators was issued on 1 May 2024.

The November 2022 statement recognised that:

“there are sometimes tensions between safety and privacy. For example, to protect users’ safety online, services might need to collect more information about their users, the content they view and their behaviour online. To protect users’ privacy, services can and should limit this data collection to what is proportionate and necessary.”

It went on:

“Where there are tensions between privacy and safety objectives, we will provide clarity on how compliance can be achieved with both regimes.”

On 16 February 2024, a week before the end of Ofcom’s Illegal Harms consultation period, the Information Commissioner’s Office published 47 pages of guidance on how data protection law applies to online content moderation processes - including moderation carried out to comply with duties under the Online Safety Act. It avowed an aim to support organisations carrying out content moderation in scope of the Act. Four months later, the ICO invited feedback on its Guidance.

Ofcom’s Illegal Harms consultation is itself liberally garnished with warnings that data protection law must be complied with, but less generously endowed with concrete guidance on exactly how to do so.

The ICO Guidance, although it put some flesh on the bones, was still pitched at a relatively high level. That was a deliberate decision. The accompanying Impact Assessment records that the ICO considered, but rejected, the option of:

“more extensive guidance discussing in depth how data protection law applies when developing or using content moderation”

in favour of:

“High level guidance setting out the ICO’s preliminary data protection and privacy expectations for online content moderation, and providing practical examples, with plans for further work as the policy area develops”.

The reason for this decision was that:

“it provides some degree of clarity for a wide variety of stakeholders, whilst still allowing the necessary flexibility for our policy positions to develop during the early stages of Ofcom’s policy and guidance development”.

The next, and perhaps most interesting, document was the ICO’s own submission to Ofcom’s Illegal Harms consultation, published on 1 March 2024. In this document the tensions between the OSA and data protection are most evident. In some areas the ICO overtly took issue with Ofcom’s approach.

What are some of the potential areas of tension?

Illegality risk assessment

The Ofcom consultation suggests that for the illegality risk assessment required under S.9 OSA service providers should, among other things, consider the following ‘core input’ to the risk assessment:

“assess any other evidence they hold that is relevant to harms on their service. This could include any existing harms reporting, research held by the service, referrals to law enforcement, data on or analysis of user behaviour relating to harms or product testing. Any types of evidence listed under Ofcom’s enhanced inputs (e.g. the results of content moderation, product testing, commissioned research) that the business already collects and which are relevant to the risk assessment, should inform the assessment. In effect, if the service already holds these inputs, they should be considered as core inputs.” [Table 9.4]

Ofcom adds that:

“… any use of users’ personal data (including any data that is not anonymised), will require services to comply with their obligations under UK data protection law. Services will need to make judgments on the data they hold to ensure it is processed lawfully, including providing appropriate transparency to users when the data is collected or further processed.” [Table 9.4]

The topic of core inputs caught the eye of the ICO, which observed:

“A key data protection consideration when processing personal data for risk assessment is the data minimisation principle set out in Article 5(1)(c) of the UK GDPR. This requires the personal data that services process to be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed. This means that services should identify the minimum amount of personal data they need to fulfil their purpose.” [p.7 – p.8]

Illegality judgements and data minimisation

Data minimisation is, more generally, an area of potential tension between the two regimes.

The Act requires service providers to make judgements about the illegality of user content. The less information is available to a service provider, the greater the risk of making arbitrary judgements (with potential ECHR implications). But the more information is retained or collected in order to make better judgements, the greater the potential conflict with the data minimisation principle (UK GDPR Article 5(1)(c)).

Section 192 of the OSA requires service providers to make illegality judgments on the basis of all relevant information reasonably available to a service provider. Ofcom’s Illegal Harms consultation document suggests that what is regarded as “reasonably available” may be limited by the constraints of data protection law:

“However, we recognise that in certain instances services may have access to information, which is relevant to a specific content judgement, but which is not either typically available to all services, which would require significant resources to collect, or the use of which would not be lawful under data protection or privacy laws. When making illegal content judgments, services should continue to have reasonable regard to any other relevant information to which they have access, above and beyond what is set out in the Illegal Content Judgements Guidance but only so long as this information is processed lawfully, including in particular in line with data protection laws.” [26.27]

The ICO cited this (and a related more equivocal passage at [A1.67]) as an example of where the Ofcom guidance is “less clear about the approach that services should take to balancing the need to make [illegal content judgements] with the need to comply with data minimisation.” The ICO said:

“The data minimisation principle requires that personal data being processed be relevant, adequate, and limited to what is necessary. Where an [illegal content judgement] can be made accurately without the need to process the additional personal data held by a service it would not be necessary for a service to process this information under data protection law. …

The text could also clarify that services may not always need to consult all available information in every instance, if it is possible to make an accurate judgement using less information.” [page 25]

There is, however, a lurking paradox of unknown unknowns. For an offence of a kind for which factual context is important, the service provider knows that relevant contextual information could exist, but does not know if it does exist. Such information (if it does exist) may or may not be available to the service provider: it could be wholly off platform and beyond its knowledge; or it could be accessible on the platform in principle, but potentially constrained by data protection law.

Without knowing whether further relevant contextual information in fact exists, how is a provider to determine whether it is able to make an accurate judgement with only the information that it knows about? How can a provider know whether further relevant information exists without going looking for it, potentially breaching data protection law in the process? The ICO Guidance says:

“You are complying with the data protection minimisation principle, as long as you can demonstrate that using [other contextual] information is: - necessary to achieve your purpose (e.g. because it ensures your decisions are accurate and fair ..."
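
One possible way of operationalising that guidance (and this is no more than a sketch on my part, with invented names, stub functions and thresholds) is a tiered approach: make the judgement on the content alone where that can be done with sufficient confidence, and retrieve further contextual personal data only where it cannot:

    # Illustrative sketch only: a tiered approach to illegal content judgements in
    # which additional contextual personal data is consulted only where an initial,
    # content-only assessment is inconclusive. All names, stubs and thresholds are
    # invented for the example.

    from typing import Callable, Tuple

    def assess_content_alone(content: str) -> Tuple[bool, float]:
        """Stub first-stage assessment: returns (is_illegal, confidence)."""
        suspicious = "sell fake passports" in content.lower()
        return suspicious, 0.9 if suspicious else 0.4

    def assess_with_context(content: str, context: dict) -> bool:
        """Stub second-stage assessment using extra context (e.g. prior complaints)."""
        return bool(context.get("prior_upheld_complaints", 0) > 0)

    def judge_illegality(content: str, fetch_context: Callable[[], dict],
                         confidence_threshold: float = 0.8) -> bool:
        is_illegal, confidence = assess_content_alone(content)
        if confidence >= confidence_threshold:
            return is_illegal  # accurate enough: no further personal data processed
        # Only now is additional personal data retrieved, on a data-minimised basis.
        return assess_with_context(content, fetch_context())

    # First stage is confident here, so the context is never fetched.
    print(judge_illegality("offering to sell fake passports",
                           lambda: {"prior_upheld_complaints": 0}))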

Automated content moderation

The OSA contemplates that an Ofcom Code of Practice may for some use cases recommend service providers to undertake fully automated content moderation. The UK GDPR contains specific provisions in Article 22 about certain solely automated processing of personal data.

The Ofcom consultation says:

“Insofar as services use automated processing in content moderation, we consider that any interference with users’ rights to privacy under Article 8 ECHR would be slight. Such processing would need to be undertaken in compliance with relevant data protection legislation (including, so far as the UK GDPR applies, rules about processing by third parties or international data transfers).” [12.72]

Similar statements are made elsewhere in the consultation.

The ICO disagrees with the first sentence:

“From a data protection perspective, we do not agree that the potential privacy impact of automated scanning is slight. Whilst it is true that automation may be a useful privacy safeguard, the moderation of content using automated means will still have data protection implications for service users whose content is being scanned. Automation itself carries risks to the rights and freedoms of individuals, which can be exacerbated when the processing is carried out at scale.” [p.12]

It goes on:

“Our guidance on content moderation is clear that content moderation involves personal data processing at all stages of the moderation process, and hence data protection must be considered at all stages (including when automated processing is used, not just when a human looks at content).” [p.12]

The ICO took the view that from a data protection law perspective, Ofcom’s proposed safeguards for the three recommended automated content moderation measures (CSAM perceptual hash matching, CSAM URL matching and fraud fuzzy keyword detection) are incomplete.
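
For readers unfamiliar with the third of those techniques, the sketch below illustrates in very simplified form what ‘fuzzy’ keyword detection involves: matching terms associated with articles for use in frauds even when slightly misspelled or obfuscated. The keyword list, matching method and threshold are invented for the example; neither Ofcom nor the ICO prescribes any particular implementation:

    # Illustrative sketch of fuzzy keyword detection. The keywords and similarity
    # threshold are assumptions made up for this example.

    from difflib import SequenceMatcher

    FRAUD_KEYWORDS = {"cloned card", "fake id template"}  # hypothetical examples

    def fuzzy_hits(text: str, min_ratio: float = 0.85) -> set[str]:
        """Return keywords that approximately match any token window in the text."""
        tokens = text.lower().split()
        hits = set()
        for kw in FRAUD_KEYWORDS:
            width = len(kw.split())
            for i in range(len(tokens) - width + 1):
                window = " ".join(tokens[i:i + width])
                if SequenceMatcher(None, kw, window).ratio() >= min_ratio:
                    hits.add(kw)
        return hits

    # Catches the obfuscated spelling "cl0ned card" despite the substituted character.
    print(fuzzy_hits("selling cl0ned card details cheap"))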

As to UK GDPR Article 22, the ICO commented in relation to the series of recommended measures that involve (or may involve) automated processing:

“The automated content moderation measures have the potential to engage UK GDPR Article 22, particularly measures 4G [Hash matching for CSAM] and I [Keyword detection regarding articles for use in frauds]. Article 22 of the UK GDPR places restrictions about when services can carry out solely automated decision-making based on personal information where the decision has legal or similarly significant effects. … To achieve coherence across the regimes it is important that the recommended code measures are compatible with UK GDPR Article 22 requirements.”

It is worth noting that UK GDPR Article 22 permits such solely automated decisions to be made where required by “domestic law”, provided that this sets out suitable safeguards. The new Data (Use and Access) Bill includes some changes to the Article 22 provisions.
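
Article 22 is, in essence, a restriction on solely automated decision-making with legal or similarly significant effects. A very simplified sketch of the kind of routing safeguard that might address it is set out below; the decision categories and function names are my own assumptions, not drawn from the ICO or Ofcom:

    # Illustrative sketch: routing automated moderation decisions so that those with
    # legal or similarly significant effects are not taken on a solely automated
    # basis. The categories of "significant" decision are assumed for the example.

    SIGNIFICANT_EFFECTS = {"account_ban", "report_to_authority"}  # assumed examples

    def apply_decision(decision: str, automated: bool, request_human_review) -> str:
        """Return how the decision is actioned."""
        if automated and decision in SIGNIFICANT_EFFECTS:
            # Meaningful human involvement before the decision takes effect.
            return request_human_review(decision)
        return f"{decision}: actioned automatically"

    review = lambda d: f"{d}: queued for human review"
    print(apply_decision("downrank", automated=True, request_human_review=review))
    print(apply_decision("report_to_authority", automated=True, request_human_review=review))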

Accuracy of illegal content judgements

Data protection law also encompasses an accuracy principle – UK GDPR Article 5(1)(d). The ICO Guidance assesses the application of this principle to be limited to the accuracy of facts underlying content judgements and to accurate recording of those judgements. However, the ICO Guidance also appears to suggest that the separate fair processing principle (UK GDPR Article 5(1)(a)) could have implications for the substantive accuracy of judgements themselves:

“You are unlikely to be treating users fairly if you make inaccurate judgements or biased moderation decisions based on their personal information."

If substantive accuracy could be both a fair processing matter and an OSA issue, how might this manifest itself? Some examples are:

Reasonably available information: As already noted, S.192 of the OSA requires a service provider to make illegality judgements on the basis of all relevant information reasonably available to it. Again as already noted, what is regarded as ‘reasonably available’ may be affected by data protection law, especially the data minimisation requirement to demonstrate that processing the data is necessary to achieve the purpose of an accurate and fair illegality judgement.

As to necessity, could the ‘unknown unknowns’ paradox (see above) come into play: how can a provider know if contextual information is available and relevant to the accuracy of the judgement that it has to make without seeking out the information? Could necessity be established simply on the basis that it is the kind of offence to which contextual information (if it existed) could be relevant, or would there have to be some justification specific to the facts of the matter under consideration, such as an indication that further information existed?

Generally, in connection with the information that service providers may use to make illegal content judgements, Ofcom says:

“When making illegal content judgements, services should continue to have reasonable regard to any other relevant information to which they have access, above and beyond what is set out in the Illegal Content Judgements Guidance but only so long as this information is processed lawfully, including in particular with data protection laws”; [26.27]

One example of this area of potential tension between the OSA and data protection is reference back to previous user complaints when making a judgement about content. In its section on handling user complaints, Ofcom’s consultation says:

“To the extent that a service needed to retain information to process complaints, this may include personal data. However, we are not proposing to recommend that services should process or retain any extra information beyond the minimum needed to comply with duties which are clearly set out on the face of the Act. To the extent that services choose to do so, this data would be held by the service subject to data protection laws.” [16.113]

Elsewhere, Ofcom says that “depending on the context, reasonably available information may include … complaints information” [26.26, A1.66], subject to the proviso that:

“processing some of the types of information (‘data’) listed below has potential implications for users’ right to privacy. Services also need to ensure they process personal data in line with data protection laws. In particular, the likelihood is high that in considering U2U content a service will come across personal data including special category data and possibly criminal offence data, to which UK data protection laws apply.” [26.25]  

The ICO Guidance said:

“Data minimisation still applies when services use personal information to make illegal content judgements under section 192 of the OSA. Under data protection law, this means you must use personal information that is proportionate and limited to what is necessary to make illegal content judgements. …

Moderation of content can be highly contextual. Sometimes, you may need to use other types of personal information (beyond just the content) to decide whether you need to take moderation action, including users’ … records of previous content policy violations. …

You are complying with the data minimisation principle, as long as you can demonstrate that using this information is:

- necessary to achieve your purpose (eg because it ensures your decisions are accurate and fair);

- and no less intrusive option is available to achieve this.”

The ICO submission to the Ofcom consultation says:

“Paragraphs 16.26-27 of the consultation document state that Ofcom decided not to include a recommendation for services to keep complaints data to facilitate appeals as part of this measure. However, other consultation measures require or recommend the further use of complaints data, for example the risk assessment guidance, illegal content judgements guidance… . We think that it is important that the overall package of measures make clear what information Ofcom considers necessary for services to retain and use to comply with online safety obligations. This will help services to feel confident about complying with their data protection obligations.” [page 18]

Assuming that a service provider does have access to all relevant information, if substantive accuracy of an illegal content judgement were an aspect of the data protection fair processing principle, might that connote a degree of certainty that differs from the Act’s ‘reasonable grounds to infer’ standard in S.192 or the ‘awareness’ standard in Section 10(3)(b) (if they be different from each other)?

NCA reporting

Related to both automated processing and accuracy are the ICO’s submission comments on the obligation under S.66 for a service provider who becomes aware of previously unreported UK-linked CSAM to report it to the National Crime Agency.

The Ofcom consultation notes:

“Interference with users’ or other individuals’ privacy rights may also arise insofar as the option would lead to reporting to reporting bodies or other organisations in relation to CSAM detected using perceptual hash matching technology – for example, that a user was responsible for uploading content detected as CSAM to the service.” [14.80]

Ofcom goes on:

“As explained above, use of perceptual hash matching can result in cases where detected content is a false positive match for CSAM, or a match for content that is not CSAM and has been wrongly included in the hash database. These cases could result in individuals being incorrectly reported to reporting bodies or other organisations, which would represent a potentially significant intrusion into their privacy.” [14.84]

It adds:

“It is not possible to assess in detail the potential impact of incorrect reporting of users: the number of users potentially affected will depend on how services implement hash matching; while further details of the reporting requirements under the Act are to be specified by the Secretary of State in secondary legislation. However, the option includes principles and safeguards in relation to the hash database, the configuration of the technology, and the use of human moderators that are designed to help secure that the technology operates accurately. … [14.85]

In addition, reporting bodies have processes in place to triage and assess all reports received, ensuring that no action is taken in cases relating to obvious false positives. These processes are currently in place at NCMEC and will also be in place at the Designated Reporting Body in the NCA, to ensure that investigatory action is only taken in appropriate circumstances.” [14.86]

The ICO takes a less sanguine view of the consequences of reporting a false positive to the NCA:

“Ofcom refers to the principles and safeguards in the content moderation measures as being safeguards that are designed to help secure that the technology operates accurately in connection with user reports to the NCA. Accuracy is also a relevant consideration in data protection law. The accuracy principle requires that services take all reasonable steps to ensure that the personal data they process is not incorrect or misleading as to any matter of fact. Where content moderation decisions could have significant adverse impacts on individuals, services must be able to demonstrate that they have put sufficient effort into ensuring accuracy. 

We are concerned that the safeguards in measure 4G do not differentiate between the level of accuracy that is appropriate for reports to the NCA (which carries a particular risk of serious damage to the rights, freedoms and interests of a person who is incorrectly reported) and other significant but potentially less harmful actions such as content takedown. 

Our reading of measure 4G is that it could allow for the content moderation technology to be configured in such a way that recognises that false positives will be reported to the NCA. Whilst we acknowledge that it may not be possible to completely eliminate false positives being reported, we are concerned that a margin for error could be routinely “factored into” a service’s systems and processes as a matter of course.

This is unlikely to be compatible with a service taking all reasonable steps to ensure that the personal data it processes is not inaccurate.”

One point of particular interest is the ICO’s apparent distinction between the level of accuracy for content takedown and that for reporting to the NCA. Both Section 10(3)(b) (the takedown obligation) and Section 66 (as interpreted by Section 70) use the same language to trigger the respective obligations (emphasis added):

- A duty to operate a service using proportionate systems and processes designed to, where the provider is alerted by a person to the presence of any illegal content or becomes aware of it in any other way, swiftly take down such content. (S.10(3)(b))

- … must operate the service using systems and processes which secure (so far as possible) that the provider reports all detected and unreported CSEA content present on the service to the NCA. … CSEA content is “detected” by a provider when the provider becomes aware of the content, whether by means of the provider’s systems or processes or as a result of another person alerting the provider. (S.66/70)

The ICO argument appears to suggest that data protection considerations should inform the construction of the sections, with the result that the same word ‘aware’ in the two OSA provisions would connote different levels of confidence in the accuracy of the information on which a decision is based.
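
If a distinction of that kind were given effect in a provider’s systems, one possible (and purely illustrative) design would be to apply a stricter confidence threshold before an automated match triggers a report to the NCA than before it triggers takedown. The threshold figures below are invented; nothing in measure 4G or the Act specifies them:

    # Illustrative sketch of differentiated confidence thresholds for different
    # consequences of an automated match. The values are assumptions for the example.

    TAKEDOWN_THRESHOLD = 0.90    # assumed
    NCA_REPORT_THRESHOLD = 0.99  # assumed: stricter, given the graver consequences

    def actions_for_match(match_confidence: float) -> list[str]:
        actions = []
        if match_confidence >= TAKEDOWN_THRESHOLD:
            actions.append("take down content")
        if match_confidence >= NCA_REPORT_THRESHOLD:
            actions.append("report to NCA")
        elif match_confidence >= TAKEDOWN_THRESHOLD:
            actions.append("queue for human review before any report")
        return actions

    print(actions_for_match(0.95))   # takedown plus human review, no automatic report
    print(actions_for_match(0.995))  # takedown plus report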

End to end encryption

The Bill was loudly and repeatedly criticised by privacy and civil liberties campaigners for potentially threatening the ability to use end-to-end encryption on private messaging services. The provision that gave rise to this was what is now Section 121: a power for Ofcom to require, by notice to a U2U service provider, use of accredited technology to identify and swiftly take down, or prevent individuals encountering, CSEA content, whether communicated publicly or privately by means of the service. Service providers would have the option of developing or sourcing their own equivalent technology.

Under Section 125, a notice requiring the use of accredited technology is to be taken as requiring the provider to make such changes to the design or operation of the service as are necessary for the technology to be used effectively.

The concern with these provisions was that the effect of a notice could be to require a private messaging provider to cease using E2E encryption if it was incompatible with the technology required by the notice.

The S.121 power is self-standing, separate from the Act’s safety duties on service providers. It will be the subject of a separate Ofcom consultation, scheduled for December 2024.

Concomitantly, Ofcom is prevented from using its safety duty enforcement powers to require proactive technology to be used by a private communications service (S.136(6)). Broadly speaking, proactive technology is content identification technology, user profiling technology, or behaviour identification technology.

That is also reflected in the Schedule 4 restrictions on what Ofcom can recommend in a Code of Practice:

“Ofcom may not recommend in a Code of Practice the use of the technology to analyse user generated content communicated ‘privately’, or metadata relating to user-generated content communicated ‘privately’.” [14.14]

Thus, in effect, the Act’s safety duties cannot be interpreted so as to require a private communications service to use proactive technology. That is a matter for the self-standing S.121 power.

The Ofcom consultation states that E2E encryption is not inherently bad. It goes on to acknowledge the benefits of E2E encryption:

“The role of the new online safety regulations is not to restrict or prohibit the use of such functionalities, but rather to get services to put in place safeguards which allow users to enjoy the benefits they bring while managing risks appropriately” [Vol 2, Introduction]

Nevertheless, it also cites E2E encryption as a risk factor. For instance:

  • “offenders often use end-to-end encrypted services to evade detection” [Vol 2, Introduction]
  • “end-to-end encryption guarantees a user’s privacy and security of messages, but makes it harder for users to moderate for illegal content.” [6.12]
  • “Private messaging services with encryption are particularly risky, as they make the exchange of CSAM harder to detect.” [6C.139]
  • “If your service allows encrypted messaging, we would expect you to consider how this functionality can be used by potential perpetrators to avoid monitoring of communications while sharing illegal content such as CSAM or conducting illegal behaviour.” [Table 14]

The theme is repeated numerous times in relation to different offences.

In relation to its specific proposal for ‘hash matching’ to detect and remove known CSAM (Child Sexual Abuse Material), Ofcom says:

“Consistent with the restrictions in the Act, this proposal does not apply to private communications or end-to-end encrypted communications. We are not making any proposals that would involve breaking encryption. However, end-to-end encrypted services are still subject to all the safety duties set out in the Act and will still need to take steps  to mitigate risks of CSAM on their services” [Overview]
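
For readers unfamiliar with the technique, the sketch below illustrates in very simplified form how perceptual hash matching works: an image’s perceptual hash is compared against a database of hashes of known material, with a match declared within a small bit-distance. Real systems use established perceptual hashing algorithms and curated hash databases; the hashes, distance metric and threshold here are illustrative assumptions only.

    # Minimal, simplified sketch of perceptual hash matching. The 16-bit toy hashes
    # and the distance threshold are invented for this example.

    KNOWN_HASHES = {0b1011_0110_1100_0011, 0b0110_1001_0011_1100}  # toy values

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    def matches_known_hash(image_hash: int, max_distance: int = 3) -> bool:
        """Match if the hash is within max_distance bits of any known hash."""
        return any(hamming_distance(image_hash, h) <= max_distance
                   for h in KNOWN_HASHES)

    # Perceptual hashes tolerate small edits (resizing, recompression), which is why
    # matching is by distance rather than exact equality, and why false positives
    # are possible, as discussed above.
    print(matches_known_hash(0b1011_0110_1100_0111))  # one bit different: a match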

The ICO did not challenge Ofcom’s evidence bases for concluding that E2E encryption was a risk. However, it said:

“We are concerned that the benefits of these functionalities are not given enough emphasis in the risk assessment guidance and risk profiles (Annex 5). These are the documents that U2U services are most likely to consult on a regular basis. We consider that there is a risk that the risk assessment process may be interpreted by some services to mean that functionalities such as E2EE and anonymity/pseudonymity are so problematic from an online safety perspective that they should be minimised or avoided. If so, the risk assessment process could create a chilling effect on the deployment of functionalities that have important benefits, including keeping users safe online.”

and:

“In summary, we therefore suggest that the guidance should make it clear that the online safety regime does not restrict or prohibit the use of these functionalities and that the emphasis is on requiring safeguards to allow users to enjoy the benefits while managing risks appropriately.”

Whilst the ICO’s comments do not necessarily reflect tension between the OSA and data protection regimes as such, a difference of emphasis is detectable.

Generally, critics of the Act have long argued that requiring service providers to assess illegality is a recipe for arbitrary decision-making and over-removal of legal user content, likely to constitute an unjustified interference with the right of freedom of expression.  As Ofcom and the ICO attempt to get to grips with the practical realities of the duties, data protection and privacy are now joining the fray.


Friday, 20 September 2024

Public order: from street protest to the Online Safety Act

Assiduous readers of this blog will know of my fondness for working through concrete examples to illustrate how, once they come into force (now likely to be in Spring next year), platform illegal content duties under the UK Online Safety Act 2023 (OSA) might pan out in practice.

A recurring theme has been that making judgements about the legality or illegality of user content, as platforms are required to do by the OSA, is not a simple matter. The task verges at times on the impossible: platforms are required to make complex legal and factual judgements on incomplete information. Moreover, the OSA stipulates a relatively low threshold for a platform to conclude that content is illegal: reasonable grounds to infer. The combined result is that the OSA regime is likely to foster arbitrary decisions and over-takedown of legal user content.

The newest opportunity to hypothesise a concrete example is presented by the acquittal of Marieha Hussain, who was charged with a racially aggravated public order offence for carrying, at a pro-Palestine demonstration, a placard depicting Rishi Sunak and Suella Braverman as coconuts.  The prosecution alleged that this was a well-known racial slur. The district judge held that it was part of the genre of political satire, and that the prosecution had not proved to the criminal standard that it was abusive.

Ms Hussain was prosecuted for an offence in a public street, to which the Online Safety Act would not directly apply. However, what if an image of the placard appeared online? If displaying the placard in the street was sufficient to attract a criminal prosecution, even if ultimately unsuccessful, could the OSA (had it been in force) have required a platform to take action over an image of the placard displayed online? 

As it happens the prosecution in Marieha Hussain’s case was prompted by someone posting a photograph of the placard online, accompanied by a critical comment. That was followed by a response from the Metropolitan Police, who were tagged in the post.
If the Online Safety Act duties were in force (and assuming that the court had not yet delivered its acquittal verdict), how would a service provider have to go about deciding whether an online post of a photograph of the placard should be treated as illegal? How would that differ from the court process? Could the differences lead a service provider to conclude that a post containing an image of the placard should be removed? Could (or should) the fact that a prosecution had been instigated for display of the placard in the street, or (before that) that the police had indicated an interest, affect the platform’s illegality judgement?

The prosecution

As far as can be understood from the press reports, Ms Hussain was prosecuted for a racially aggravated offence under Section 5 of the Public Order Act 1986. The Section 5 offence (so far as relevant to this example) is:

“(1) A person is guilty of an offence if he—

(a) uses… abusive words or behaviour…, or

(b) displays any writing, sign or other visible representation which is… abusive,

within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

(2) An offence under this section may be committed in a public or a private place, except that no offence is committed where the words or behaviour are used, or the writing, sign or other visible representation is displayed, by a person inside a dwelling and the other person is also inside that or another dwelling.

(3) It is a defence for the accused to prove—

(a) that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress, or

(b) that he was inside a dwelling and had no reason to believe that the words or behaviour used, or the writing, sign or other visible representation displayed, would be heard or seen by a person outside that or any other dwelling, or

(c) that his conduct was reasonable.”

Additionally, someone is guilty of the offence only if they intend their words or behaviour, or the writing, sign or other visible representation, to be… abusive, or are aware that it may be… abusive.

The racially aggravated version of the offence (which carries a higher maximum fine) applies if the basic offence is committed and:

“(a) at the time of committing the offence, or immediately before or after doing so, the offender demonstrates towards the victim of the offence hostility based on the victim’s membership (or presumed membership) of a racial …  group; or

(b) the offence is motivated (wholly or partly) by hostility towards members of a racial…  group based on their membership of that group.”

The ‘victim’ for the purpose of (a) is the person likely to be caused harassment, alarm or distress.

Both offences are triable only in the magistrates’ court. If the defendant is acquitted of the racially aggravated offence the court may go on to consider the basic offence, but only if it is charged in the alternative (which the CPS Charging Guidance says it should be).
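For those who prefer to see the moving parts laid out, the offence can be sketched as a checklist: a conduct element, a circumstance element, a mental element, three possible defences and (for the aggravated version) a hostility element. The sketch below is purely illustrative and hypothetical (the function and parameter names are invented, and neither the Act nor Ofcom prescribes any such representation), but it shows how many separate inferences are packed into even this apparently simple offence.

    # Purely illustrative and hypothetical: the separate inferences packed into
    # the Section 5 offence and its racially aggravated form. Parameter names
    # are invented; neither the Act nor Ofcom prescribes any such representation.

    def section5_basic_offence(
            abusive_words_behaviour_or_display: bool,
            within_hearing_or_sight_of_likely_victim: bool,
            intends_or_aware_may_be_abusive: bool,
            no_reason_to_believe_anyone_in_hearing_or_sight: bool,
            inside_dwelling_and_not_visible_outside: bool,
            conduct_was_reasonable: bool) -> bool:
        # Elements of the offence (conduct, circumstance, mental element)
        elements = (abusive_words_behaviour_or_display
                    and within_hearing_or_sight_of_likely_victim
                    and intends_or_aware_may_be_abusive)
        # Any one of the s.5(3) defences suffices
        defence = (no_reason_to_believe_anyone_in_hearing_or_sight
                   or inside_dwelling_and_not_visible_outside
                   or conduct_was_reasonable)
        return elements and not defence

    def section5_racially_aggravated(
            basic_offence: bool,
            hostility_demonstrated_or_motivating: bool) -> bool:
        # Aggravation: hostility demonstrated towards, or motivation by
        # hostility towards, a racial group
        return basic_offence and hostility_demonstrated_or_motivating

A court answers each of those questions beyond reasonable doubt, after hearing evidence; as discussed below, a platform has to work through much the same checklist on a lower threshold and with far less information.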

Priority offences

Both the basic offence under Section 5 and the racially aggravated version are within the scope of the Online Safety Act. They are listed in Schedule 7 as ‘priority offences’. As such, not only is a service provider required swiftly to take down illegal content if it becomes aware of it (OSA Section 10(3)(b)), but it may be required to take proportionate proactive prevention measures (OSA Section 10(2)(a)).

The Section 5 offence attracted attention during the Online Safety Bill’s passage through Parliament. On 19 May 2022 the Chair of the Joint Parliamentary Committee on Human Rights, Harriet Harman MP, wrote to the then Secretary of State, Nadine Dorries. She said:

“It is hard to see how providers, and particularly automated responses, will be able to determine whether content on their services fall on the legal or illegal side of this definition”.

She went on:

“…how will a provider of user-to-user services judge whether particular words or behaviour online are “abusive” rather than merely offensive and whether or not they are likely to cause someone “distress” sufficient to amount to a criminal offence?”

and

“Will the inclusion of section 5 Public Order Act 1986 within the category of priority illegal content, in practice, result in service providers removing content that does not meet the criminal threshold, potentially resulting in an interference with the Article 10 rights of users?”

The DCMS Minister, Chris Philp MP, replied on 16 June 2022. In response to the specific questions about Section 5 he recited the general provisions of the Bill.

JUSTICE, in its Lords Second Reading Briefing, elaborated on the concerns of the Joint Human Rights Committee and called for Section 5 to be removed from the category of priority illegal content. That did not happen.

So far, so clear. Now the picture starts to get foggy, for a variety of reasons.

Making an Illegal Content Judgement

First, is either version of the Section 5 offence capable of applying online at all? Inclusion of the Section 5 offence in Schedule 7 is not conclusive that it can be committed online. The reason for inclusion of offline offences is that, in principle, it is possible to encourage or assist online an offence that can only be committed offline. Such inchoate offences (plus conspiracy, aiding and abetting) are also designated as priority offences. (Parenthetically, applying the inchoate offences to online posts presents its own problems in practice – see here.)

One potential obstacle to applying the Section 5 offences online is the requirement that the use or display be: “within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby”. Does this require physical presence, or is online audibility or visibility sufficient? If the latter, must the defendant and the victim (i.e. the person likely to be caused harassment, alarm or distress) be online simultaneously? The Law Commission considered the simultaneity point in its consultation on Modernising Communications Offences, concluding that the point was not clear.

Ofcom, in its draft Illegal Content Judgements Guidance, does not address the question expressly. It appears to assume that the “within hearing or sight” condition can be satisfied online. That may be right. But it is perhaps unfortunate that the Act provides no mechanism for obtaining an authoritative determination from the court on a point of law of this kind.

Second, which offence should be considered? CPS practice is to charge the more serious racially aggravated offence if there is credible evidence to prove it. Under the Online Safety Act, the opposite applies: the simpler, less serious offence should be the one adjudged. The Ofcom consultation documents explain why:

“In theory, in order to identify a racially aggravated offence, the service would not only need to identify all the elements of the Public Order Act offence, but also all the elements of racial or religious aggravation. But in practice, in order to identify the content as illegal content, the service would only need to show the elements of the underlying Public Order Act priority offence, because that would be all that was needed for the takedown duty to be triggered. The racial aggravation would of course be likely to make the case more serious and urgent, but that would be more a matter of prioritisation of content for review than of identifying illegal content.” [26.81]
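Translated into workflow terms, Ofcom’s point might look something like the hypothetical sketch below: the illegality judgement turns only on the underlying Public Order Act offence, while aggravation bears on how urgently the content is reviewed. The function and parameter names are invented for illustration; nothing in the consultation mandates this design.

    def triage_public_order_post(basic_offence_inferred: bool,
                                 racial_aggravation_inferred: bool) -> dict:
        # Hypothetical: only the underlying offence is needed to trigger the
        # takedown duty; aggravation affects prioritisation for review rather
        # than whether the content is treated as illegal.
        priority = ("urgent" if (basic_offence_inferred and
                                 racial_aggravation_inferred) else "normal")
        return {"treat_as_illegal": basic_offence_inferred,
                "review_priority": priority}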

Third, how strong does the evidence of an offence have to be?

In court, a criminal offence has to be proved beyond reasonable doubt. The district judge in the Hussain case concluded that the placard was: “part of the genre of political satire” and that as such, the prosecution had “not proved to the criminal standard that it was abusive”. The prosecution had also not proved to the criminal standard that the defendant was aware that the placard may be abusive. The court reached those decisions after a two-day trial, including evidence from two academic expert witnesses called by the defence to opine on the meaning of ‘coconut’.

A service provider, however, must treat user content as illegal if it has “reasonable grounds to infer” that it is illegal. That is a lower threshold than the criminal standard.

Could that judgement be affected by the commencement of a criminal prosecution? The Director of Public Prosecutions’ Charging Guidance says that for a criminal prosecution to be brought the prosecutor: “must be satisfied that there is sufficient evidence to provide a realistic prospect of conviction…” It must be “more likely than not” that “an objective, impartial and reasonable jury, bench of magistrates or a judge hearing a case alone, properly directed and acting in accordance with the law, would convict the defendant of the charge alleged.”

Whether “reasonable grounds to infer” is a lower threshold than the “more likely than not to convict” Charging Guidance test for commencing a prosecution is a question that may merit exploration. If (as seems likely) it is lower, or even if it is just on a par, then a platform could perhaps be influenced by the fact that a prosecution had been commenced, in the light of the evidential threshold for that to occur. However, it does not follow from commencement of a prosecution for a street display that the charging threshold would necessarily be surmounted for an online post by a different person.

The more fundamental issue is that the lower the service provider threshold, the more likely that legal content will be removed and the more likely that the regime will be non-compliant with the ECHR. The JUSTICE House of Lords briefing considered that ‘reasonable grounds to infer’ was a ‘low bar’, and argued that provisions which encourage an overly risk-averse approach to content removal, resulting in legitimate content being removed, may fall short of the UK’s obligations under the ECHR.  

The Ofcom consultation observes:

“What amounts to reasonable grounds to infer in any given instance will necessarily depend on the nature and context of the content being judged and, particularly, the offence(s) that may be applicable.” [26.15]

The significance of context is discussed below. Notably, the context relevant to criminal liability for a street display of a placard may be different from that of an online post of an image of the placard by a third party.

The service provider’s illegal content judgement must also be made on the basis of “all relevant information that is reasonably available” to it. Self-evidently, a service provider making a judgement about a user post would not have the benefit of two days’ factual and expert evidence and accompanying legal argument, such as was available to the court in the Hussain prosecution. The question of what information should be regarded as reasonably available to a service provider is a knotty one, implicating data protection law as well as the terms of the OSA. Ofcom discusses this issue in its Illegal Harms consultation, as does the Information Commissioner’s Office in its submission to the Ofcom consultation. The ICO also touches on it in its Content Moderation Guidance.

In order for the Section 10(3)(b) swift takedown obligation to be triggered, the service provider must have become aware of the illegal content. Ofcom’s consultation documents implicitly suggest that the awareness threshold is the same as having reasonable grounds to infer illegality under Section 192. That equation is not necessarily as clear-cut as might be assumed (discussed here).

Fourth, whose awareness?

Ms Hussain’s placard was held not to be abusive. The court also held that she did not have the necessary awareness that the placard may be abusive. A service provider faced with an online post of an image of a placard would have to consider whether it had reasonable grounds for an inference that the placard was abusive and that the person who posted it (rather than the placard bearer) had the necessary awareness.

At least when it comes to reposting, Professor Lorna Woods, in her comments on the Ofcom Illegal Content Judgements Guidance, has argued that a requirement to evaluate the elements of an offence for each person who posts content is too narrow an interpretation of the OSA:

“The illegal content safety duties are triggered by content linked to a criminal offence, not by a requirement that a criminal offence has taken place. … The requirement for reasonable grounds to infer a criminal offence each time content is posted … presents an overly restrictive interpretation of relevant content. Such a narrow perspective is not mandated by the language of section 59, which necessitates the existence of a link at some stage, rather than in relation to each individual user. … There is no obligation to look at the mental state of each individual disseminator of the content”

Professor Woods gives as an example the reposting of intimate images without consent.

S.59 (which defines illegal content) is expressly to be read together with S.192 (illegal content judgements). S.192, at first sight, reads like an instruction manual for making a judgement in relation to each individual posting. Be that as it may, if Professor Woods’ argument is correct it seems likely, for many kinds of offence (even if not for the intimate images offence), to reintroduce the problems that the Independent Reviewer of Terrorism Legislation identified with S.59 (then Clause 52). The Bill was subsequently amended to add S.192, presumably in response to his criticisms:

“2. ...Intention, and the absence of any defence, lie at the heart of terrorism offending. ... 

16. The definition of “terrorism content” in clause 52(5) is novel because under terrorism legislation content itself can never “amount to” an offence. The commission of offences requires conduct by a person or people.

17. Clause 52(3) attempts to address this by requiring the reader of the Bill to consider content in conjunction with certain specified conduct: use, possession, viewing, accessing, publication or dissemination.

18. However, as Table 1 shows, conduct is rarely sufficient on its own to “amount to” or “constitute” a terrorism offence. It must ordinarily be accompanied by a mental element and/or take place in the absence of a defence. …

23. … It cannot be the case that where content is published etc. which might result in a terrorist offence being committed, it should be assumed that the mental element is present, and that no defence is available.

24. Otherwise, much lawful content online would “amount to” a terrorist offence.”

My own subsequent submission to the Public Bill Committee analysed Clause 52, citing the Independent Terrorism Reviewer's comments, and concluded in similar vein:

"Depending on its interpretation the Bill appears either:

6.21.1 to exclude from consideration essential ingredients of the relevant criminal offences, thereby broadening the offences to the point of arbitrariness and/or disproportionate interference with legitimate content; or

6.21.2 to require arbitrary assumptions to be made about those essential ingredients, with similar consequences for legitimate content; or

6.21.3 to require the existence of those ingredients to be adjudged, in circumstances where extrinsic factual material pertaining to those ingredients cannot be available to a filtering system.

In each case the result is arbitrariness (or impossibility), significant collateral damage to legal content, or both.”

An interpretation of the OSA that increases the likelihood of lawful content being filtered or taken down also increases concomitantly the risk of ECHR incompatibility. (See also, ‘Item by Item Judgements’ below)

On a different point, Ofcom appears to suggest that the wider and more general the audience for a controversial post, the greater the likelihood of awareness being inferred:

“A service must also draw an inference that the person posting the content concerned was at least aware that their behaviour may be abusive. Such awareness may reasonably be inferred if the abusive behaviour is very obviously likely to be distressing to most people and is posted somewhere with wide reach.” [A3.77]

In contrast:

“It is less likely to be reasonably inferred if content is posted to a place where, for example, only persons sharing similar sorts of content themselves are likely to see it.” [A3.77] 

Fifth, any defence?

As to the Section 5 defence of reasonable conduct, the district judge said that had it been necessary to go that far, she would have found Ms Hussain's conduct to be reasonable in that she was exercising her right to freedom of expression, and the judge would not have been satisfied that the prosecution was a proportionate interference with her right, or necessary in a democratic society. 

Our hypothetical assumes that no court ruling has been made. If the service provider has concluded that there are reasonable grounds to infer abusive content and awareness, how should it evaluate the possibility of a defence such as reasonable conduct?

When making an illegal content judgement a service provider can only base a judgement on the availability of a defence if it positively has some reason to infer that a defence to the offence may be successfully relied upon. That is the effect of OSA S.192(6)(b):

(6) Reasonable grounds for that inference exist in relation to content and an offence if … a provider—

(a) has reasonable grounds to infer that all elements necessary for the commission of the offence, including mental elements, are present or satisfied, and

(b) does not have reasonable grounds to infer that a defence to the offence may be successfully relied upon.

An obvious instance of positive grounds to infer a Section 5 reasonable conduct defence on the part of the poster would be a comment added to the image.
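Put schematically, S.192(6) is a two-limb test, and the defence limb bites only where positive grounds exist. Here is a minimal sketch, assuming hypothetical inputs for each limb (the Act does not, of course, say how either inference is to be made):

    def reasonable_grounds_to_infer_illegal(elements_inferred: bool,
                                            defence_inferred: bool) -> bool:
        # s.192(6): treat content as illegal only if there are reasonable
        # grounds to infer all elements of the offence (including mental
        # elements) AND no reasonable grounds to infer that a defence may be
        # successfully relied upon.
        return elements_inferred and not defence_inferred

    # e.g. a comment added by the poster might supply positive grounds to
    # infer a 'reasonable conduct' defence, flipping the outcome even where
    # the elements limb is satisfied.
    assert reasonable_grounds_to_infer_illegal(True, False) is True
    assert reasonable_grounds_to_infer_illegal(True, True) is False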

In a different context (terrorism), Ofcom has reached the same conclusion as to the need for positive grounds:

“There is a defence of ‘reasonable excuse’ which may be harder for services to make reasonable inferences about, but they only need to consider it if there are positive grounds to do so.” [26.93]

Similarly, for the offence of stirring up racial hatred:

“In cases where there are no reasonable grounds to infer intent it is a defence for a person to show that he was not aware that the content might be insulting or abusive. However, positive grounds to infer this would need to be available to the service.” [A3.90]

As to the Section 5 “reasonable conduct” defence: suppose that a service provider, considering the original online post of the Marieha Hussain placard in the absence of a court judgment, had concluded that there were reasonable grounds to infer that the placard was abusive and that the post satisfied the other elements of the offence. It would then have to consider whether the comment by the poster (in addition to anything inferable from the nature of the posted image) provided reasonable grounds to infer that a defence of reasonable conduct might be successfully relied upon.

It might also be relevant to consider whether there were reasonable grounds to infer that the original placard holder could have had a reasonable conduct defence for the street display, as the judge in the Hussain case held that she would have done. However, the defence is specific to the conduct of each defendant, not a finding about the nature of the content.

As the judge's remarks demonstrate, consideration of the reasonable conduct defence can result in the service provider making judgements about the necessity and proportionality of the interference with freedom of expression. 

Ofcom’s Illegal Content Judgements Guidance says:

“Services should take a common-sense approach to considering whether the behaviour displayed in the content could be considered reasonable. For example, it may be reasonable (even if unwise) to abuse someone in response to abuse.” [A3.68]

Common sense also comes to the aid of the harassment and distress element of the Section 5 offence:

“Services should consider any information they hold about what any complainant has said about the emotional impact of the content in question, and take a common-sense approach about whether it is likely to cause harassment or distress.” [A3.27]

Appeals to common sense bring to mind the Oxford Reference definition of palm tree justice: 

“Ad hoc legal decision-making, the judge metaphorically sitting under a tree to make rulings based on common sense rather than legal principles or rules.”

The perceived value of guidance based on common sense may also depend on whether one shares the William O. Douglas view that ‘Common sense often makes good law’ or that of Albert Einstein: “Common sense is the collection of prejudices acquired by age eighteen”.

In addition to reasonable conduct, Section 5 of the Public Order Act provides a defence “that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress”.

Ofcom suggests that a post that is legal may be rendered illegal through the poster being deprived of the defence as the result of a notification:

“it is a defence if it is reasonable to infer that the person had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment or distress. This is most likely to be relevant where a user is challenging a takedown decision (but of course if the person becomes aware as a result of the takedown decision that such a person was within hearing or sight, the content would become illegal content).” [A3.33]

That and Ofcom’s comment on the relationship between awareness and wide reach are both reminiscent of the concerns about the “harmful communications” offence that was originally included in the Bill, then dropped.

Sixth, what is the significance of context? The Hussain decision appears to have turned on the court’s finding of what was ‘abusive’ in the context of the display of the placard (albeit that the racially aggravated element of the alleged offence inevitably focused attention on whether the placard was specifically racially abusive).

The Ofcom Illegal Content Judgements Guidance on the Section 5 offence emphasises the significance of context:

“However, the context should be taken into account carefully, since abusive content may also carry political or religious meaning, and will be more likely to be a reasonable exercise of the right to freedom of expression if it is.” [A3.79]

While some of the context available to a service provider may be the same as that available to a court (for instance it is apparent on the face of the image of the Hussain placard that it was a political comment), much of the available context may be different: different person, different place, different audience, additional comments, no expert witnesses. Add to that a different standard of proof and a different statutory framework within which to judge illegality, and the possibility of a different (most likely more restrictive) conclusion on legality from that which a court would reach (even if considering the same version of the offence) is significant.

The last word on context should perhaps go to Ofcom, in its Illegal Content Judgements Guidance on Section 5:

“We have not given any usage examples here, due to the particularly strong importance of context to these judgements.” [A3.81]

Item by item judgements?

While some may argue that the OSA is about systems and processes, not content, there is no doubt (pace Professor Woods’ argument noted above) that at least some of its illegality duties require platforms to make item by item content judgements (see discussion here). The duties do not, from a supervision and enforcement point of view, require a service provider to get every individual judgement right. They do require service providers to make individual content judgements.

Ofcom evidently expects service providers to make item by item judgements on particular content, while noting that the function of the online safety regime is different from that of a court:

“The ‘beyond reasonable doubt’ threshold is a finding that only UK courts can reach. When the ‘beyond reasonable doubt’ threshold is found in UK courts, the person(s) responsible for the relevant illegal activity will face criminal conviction. However, when services have established ‘reasonable ground to infer’ that content is illegal according to the Act, this does not mean that the user will necessarily face any criminal liability for the content and nor is it necessary that any user has been prosecuted or convicted of a criminal offence in respect of such content. When services make an illegal content judgement in relation to particular content and have reasonable grounds to infer that the content is illegal, the content must however be taken down.” [26.14]

Critics of the OSA illegality duty have always doubted the feasibility or appropriateness of requiring platforms to make individual content legality judgements, especially at scale. Those coming at it from a freedom of expression perspective emphasise the likelihood of arbitrary judgements, over-removal of legal content and consequent incompatibility with the European Convention on Human Rights.

The ‘systems and processes’ school of thought generally advocates harm mitigation measures (ideally content-agnostic) in preference to item-by-item content judgements. Relatedly, the Online Safety Network recently suggested in a Bluesky post that “the government needs to amend the Act to make clear that - once content has been found to be illegal content – it should continue to be categorised that way”. That would reduce the need for successive item-by-item illegality judgements in relation to the same content, and would make explicit what Professor Woods has argued is already the proper interpretation of the Act (see above).

The comments of the Online Safety Network were made in the specific context of the non-consensual intimate image offence. For offences where the gravamen lies in the specific nature of the prohibited content, and the role of any mental element, other condition or defence is secondary (such as ensuring only that accidental behaviour is not criminalised), there may be some force in the suggestion that the same content should always be treated in the same way (at least if the initial finding of illegality has been verified to a high standard). Ofcom’s proposed CSAM image filtering duties, for instance, would operate on that basis.

Elevated to a general principle, however, the suggestion becomes problematic. For offences where the conduct element is broad or vague (such as the Section 5 offence), or where context is significant, or where the heavy lifting of keeping the offence within proper bounds is done by the mental element or by defences, it would be overreaching (and at serious risk of ECHR incompatibility) automatically to deem the same item of content to be illegal regardless of context, intention or of any other factors relevant to illegality. In the terrorism field filtering algorithms have had trouble distinguishing between illegal terrorist content and legal news reports of the same content. To deem that content always to be illegal for the purpose of filtering and takedown duties would be controversial, to say the least.

The Online Safety Network went on to comment that “the purpose of the regime is not to punish the person sharing the content, but to control the flow of that content.” It is true that the safety duties do not of themselves result in criminal liability of the user. But “don’t worry, we’re only going to suppress what you say” does not feel like the most persuasive argument for an interference with lawful freedom of expression.

[The original version of this post stated: "Since Ms Hussain’s placard was held not to be abusive, it appears that the magistrates’ court did not rule on any available defences." Now updated, with some consequential additions to the discussion of the reasonable conduct defence, in the light of Professor Augustine John's fuller account of the judge's ruling. (21 September 2024)]