Wednesday 24 July 2024

The Online Safety Act: proactive illegality duties, safeguards and proportionality

Part 4 of a short series of reflections on Ofcom’s Illegal Harms consultation under the Online Safety Act 2023 (OSA). 

A significant proportion of the consultation’s discussion of Ofcom's proposed Code of Practice recommendations — especially those involving proactive monitoring and detection of illegal content — is taken up with enumerating and evaluating safeguards to accompany each recommended measure.

That is to be expected, for two reasons. First, the OSA itself provides in Schedule 4 that measures recommended in a Code of Practice must be designed in the light of the importance of protecting the privacy of users and the right of users to freedom of expression within the law, and (where appropriate) incorporate safeguards for the protection of those matters.

Second, the potential interference with users' fundamental rights (notably freedom of expression and privacy) brings into play the European Convention on Human Rights (ECHR) and the Human Rights Act (which, following the UK's recent general election, we can assume will be with us for the foreseeable future).

The first step in the ECHR analysis is to consider whether the interference is “prescribed by law”. This is a threshold condition: if the interference fails that test, it is the end of the matter. When considering whether an interference contained in a statute is prescribed by law, it is not enough that the law has been passed by Parliament and is publicly accessible. It also has to have the “quality of law”: it must be sufficiently clear and precise that someone potentially affected by it can foresee in advance, with reasonable certainty, how the law will apply to them.

Requirements (strictly speaking, in the case of the OSA, Ofcom recommendations) for automated proactive detection, filtering and removal of user content present a particularly high risk of arbitrary interference with, and over-removal of, legal content. They can also be seen as a species of prior restraint. The European Court of Human Rights observed in Yildirim that "the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court".

Compatibility with the ECHR operates at two levels: the legislative measure and individual decisions taken under it. A court will regard itself as well placed, since it has the facts to hand, to determine whether an individual decision is or is not a justified interference with a Convention right. It is far less willing to declare a legislative measure per se incompatible, unless it is clear that when applied in practice it will result in a breach of Convention rights in most or all cases. If the measure is capable of being operated in a way that does not breach the Convention, then it will not be per se incompatible.

However, there is an important rider: the UK courts have said that in order to protect against arbitrary interference there must be safeguards which have the effect of enabling the proportionality of the interference to be adequately examined.

In the case of legislation such as the OSA, where the Act frames the duties at a very high level and a regulator is authorised to flesh them out, the necessary safeguards have to be provided in Ofcom's Codes of Practice and its statutory guidance. If such safeguards are not provided, or if they are not sufficient, then the regime will fall at the first Convention hurdle of not being prescribed by law. The ECHR compatibility of the regime on this score is thus heavily dependent on Ofcom's work product.

Much judicial ink has been expended on explaining the precise underlying rationale for the “capable of being adequately examined” test. It is safest to regard it as an aspect of the prescribed by law (a.k.a. “legality”) principle: the reason why legislation must be reasonably clear and precise is in order to prevent arbitrariness and the abuse of imprecise rules or unfettered discretionary powers. If the impact of the scheme is foreseeable, then its proportionality is capable of being assessed. If its impact across the board is not discernible, then its impact will be arbitrary.

Lady Hale said in the Supreme Court case of Gallagher:

“The foundation of the principle of legality is the rule of law itself - that people are to be governed by laws not men. They must not be subjected to the arbitrary - that is, the unprincipled, whimsical or inconsistent - decisions of those in power. 

This means, first, that the law must be adequately accessible and ascertainable, so that people can know what it is; and second, that it must be sufficiently precise to enable a person - with legal advice if necessary - to regulate his conduct accordingly. The law will not be sufficiently predictable if it is too broad, too imprecise or confers an unfettered discretion on those in power. 

This is a separate question from whether the law in question constitutes a disproportionate interference with a Convention right - the law in question must contain safeguards which enable the proportionality of the interference to be adequately examined.

This does not mean that the law in question has to contain a mechanism for the review of decisions in every individual case: it means only that it has to be possible to examine both the law itself and the decisions made under it, to see whether they pass the test of being necessary in a democratic society.”

In the final analysis it may be said that safeguards have to provide sufficient protection against arbitrariness.

The courts have stressed that challenging an entire regime ex ante on proportionality grounds presents a high hurdle and will rarely succeed, compared with a challenge by an individual who claims that their rights have been violated in a particular instance. Nevertheless, the safeguards proposed by Ofcom have to pass the prescribed by law test. If they do pass, then the actual proportionality of a given interference can be considered should a case arise.

The impact of the legality requirement and the nature of the required safeguards have to be considered in the light of the triangular structure of the Online Safety Act regime. We are not here dealing with a discretionary power vested in a state official to direct a user to take down their post. The OSA regime places legal obligations on intermediary service providers. The steps that they take to comply with those obligations have the potential to affect users' rights, particularly freedom of expression. 

Foreseeability requires that a user should be able to predict, with reasonable certainty, whether their contemplated online post is liable to be affected by actions taken by a service provider in discharging its obligations under the Act.

The safeguards stipulated by Ofcom should therefore provide the requisite degree of predictability for users in respect of blocking and removal actions to be taken by service providers when carrying out Ofcom's recommended measures.

As regards the consultation’s general approach to ECHR compliance, two points stand out. The first is that there is virtually no discussion of the “prescribed by law” requirement. Its existence is recited in many places, but the substantive discussion of ECHR compatibility proceeds directly to discussion of legitimate aim, necessity and proportionality of the recommended measures. Para 1.14 of the consultation may provide a clue as to why that is:

“In passing the Act, Parliament has set out in legislation the interferences prescribed by law and which it has judged to be necessary in our democratic society.”

Similarly in para 12.64:

“…our starting point is that Parliament has determined that services should take proportionate steps to protect UK users from illegal content. Of course there is some risk of error in them doing this, but that risk is inherent in the scheme of the Act.”

There is possibly a hint here of regarding the fact that Parliament has passed legislation as being sufficient of itself to satisfy the “prescribed by law” requirement. That may be the starting point, but it is not the end point.

The second point is that insofar as Ofcom has focused on the need for clarity and certainty, it has done so from the perspective of providing clarity to service providers. The Act requires this. Schedule 4 provides that the measures described in a Code of Practice must be:

“sufficiently clear, and at a sufficiently detailed level, that providers understand what those measures entail in practice;”

That, however, does not detract from the ECHR requirement that the potential for interference with users’ privacy and freedom of expression must also be reasonably clear and precise.

The two requirements do not necessarily go hand in hand. A provision may be clear as to the amount of discretion that it gives to a service provider, yet unforeseeable in its effect on the freedom of expression of users.

Several aspects of Ofcom's proposed safeguards in relation to automated detection and related takedowns give pause for thought on whether the proportionality of the interference is capable of being adequately assessed. The recommendations (which would apply only to some services) are:

  • Perceptual hash matching against a database of known CSAM (draft U2U Code of Practice, A4.23)
  • URL matching against a list of known CSAM URLs (draft U2U Code of Practice, A4.37)
  • Fuzzy keyword matching to detect articles for use in fraud (draft U2U Code of Practice, A4.45)

The concerns are most apparent with the fraud keyword proposal, albeit they are not entirely absent with CSAM hash and URL matching. 

URL matching presents the fewest challenges. Ofcom's proposed safeguards relate entirely to the process for establishing and securing the list of URLs. They provide that the service provider should source the list from: 

“a person with expertise in the identification of CSAM, and who has arrangements in place to [inter alia] secure (so far as possible) that URLs at which CSAM is present, and domains which are entirely or predominantly dedicated to CSAM, are correctly identified before they are added to the list; to review CSAM URLs on the list, and remove any which are no longer CSAM URLs” [draft Code of Practice, A4.40]

By way of further safeguards, both the person with expertise and the service provider should secure the list from unauthorised access, interference or exploitation (whether by persons who work for the provider or are providing a service to the provider, or any other person).

The reasonable assumption is that the technology is capable of accurately matching detected URLs with the list, such that no further safeguards are required on that score.
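By way of a purely illustrative sketch (nothing below appears in the draft Code of Practice: the list contents, normalisation rules and function names are invented), exact URL matching leaves little room for judgement once the list itself is trusted, because a detected URL either is or is not on the list:

```python
# Illustrative only: exact matching of a detected URL against a supplied CSAM URL list.
# The list source, entries and normalisation choices are assumptions for demonstration,
# not anything specified in the draft Code of Practice.
from urllib.parse import urlparse

def normalise(url: str) -> str:
    """Reduce a URL to a comparable canonical form (lower-case, no trailing slash)."""
    parsed = urlparse(url.strip().lower())
    return f"{parsed.netloc}{parsed.path.rstrip('/')}"

def load_list(entries: list[str]) -> set[str]:
    """Build a set of normalised URLs from the list supplied by the expert body."""
    return {normalise(entry) for entry in entries}

def is_match(url: str, url_list: set[str]) -> bool:
    """Exact membership test: no similarity scoring, so no threshold to configure."""
    return normalise(url) in url_list

url_list = load_list(["https://example.org/bad-page"])      # hypothetical entry
print(is_match("https://EXAMPLE.org/bad-page/", url_list))  # True
print(is_match("https://example.org/other-page", url_list)) # False
```

On this picture the judgement-laden questions (is this URL really a CSAM URL, should it stay on the list) sit with the compiler of the list rather than with the matching step itself, which is consistent with Ofcom's safeguards concentrating on the former.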

If there were any concern about adequacy of these safeguards, it would probably be whether "a person with expertise in the identification of CSAM" is sufficiently precisely articulated.

For CSAM hash matching the draft Code of Practice contains equivalent safeguards to URL matching for establishment and security of the hash database. However, further safeguards are required since the recommendation of perceptual hashing introduces an element of judgement into the matching process, with the concomitant risk of false positives and consequent blocking or removal of legal user content.

Here the adequacy of the proposed safeguards may be open to more serious debate. The draft Code of Practice states that the perceptual hashing technology should be configured so that its performance strikes "an appropriate balance between precision and recall".

Precision and recall refer to the incidence of false positives and missed hits. There is typically a trade-off: fewer missed hits means more false positives. As to what is an appropriate balance between them, the draft Code of Practice stipulates that the provider should ensure that the following matters are taken into account: 

- The risk of harm relating to image-based CSAM, as identified in the risk assessment of the service, and including in particular information reasonably available to the provider about the prevalence of CSAM content on its service;

- The proportion of detected content that is a false positive; and

- The effectiveness of the systems and/or processes used to identify false positives.

Annex 15 to the Consultation suggests various further factors that could point towards striking the balance in favour of either precision or recall.

The draft Code of Practice stipulates that human moderators should review “an appropriate proportion” of material detected as CSAM, and sets out principles that the service provider should take into account in deciding what proportion of detected content it is appropriate to review - for instance that the resource dedicated to review of detected content should be proportionate to the degree of accuracy achieved by the perceptual hash matching technology. It also provides various periodic review and record-keeping recommendations.

Annex 15 sets out Ofcom’s reasons (related to differences between perceptual hash technologies) for not setting a threshold which should be used to determine whether an image is a match.
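To illustrate why the threshold choice matters (a sketch only: the 16-bit “hashes”, labelled sample and thresholds below are invented and bear no relation to any real perceptual hash technology or database), a perceptual hash match is typically declared when the distance between an uploaded image's hash and a database hash falls within a chosen threshold, and moving that threshold trades precision against recall:

```python
# Illustrative sketch: how a distance threshold on perceptual hashes trades
# precision against recall. All values are invented for demonstration.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def evaluate(sample, threshold):
    """sample items: (upload_hash, nearest_db_hash, actually_csam). Returns (precision, recall)."""
    tp = fp = fn = 0
    for upload_hash, db_hash, actually_csam in sample:
        flagged = hamming(upload_hash, db_hash) <= threshold
        if flagged and actually_csam:
            tp += 1
        elif flagged and not actually_csam:
            fp += 1   # false positive: legal content flagged
        elif not flagged and actually_csam:
            fn += 1   # missed hit
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical labelled sample (distances from the database hash: 0, 1, 2, 5 and 6 bits).
BASE = 0b1010101010101010
sample = [
    (0b1010101010101010, BASE, True),   # identical image
    (0b1010101010101011, BASE, True),   # lightly altered illegal image
    (0b1010101010101001, BASE, False),  # visually similar but legal image
    (0b1010101010110101, BASE, False),  # legal image, moderately similar
    (0b1010101010010101, BASE, True),   # heavily altered illegal image
]

for threshold in (0, 2, 6):
    p, r = evaluate(sample, threshold)
    print(f"threshold={threshold}: precision={p:.2f} recall={r:.2f}")
# Tightening the threshold raises precision (fewer false positives) but lowers
# recall (more missed hits); loosening it does the opposite.
```

On this invented sample the output runs from precision 1.00 / recall 0.33 at the strictest setting to precision 0.60 / recall 1.00 at the loosest: precisely the kind of balance that the draft Code leaves to the provider.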

The substantive balancing and proportionality decisions are thus parked firmly on the desk of the service provider. However, neither the draft Code of Practice nor the Act itself contains any indication of what is to be regarded as a proportionate or disproportionate level of interference with legal user content.

The result is that two different service providers could readily apply the stipulated safeguards in equivalent factual situations, follow the prescribed process and reach significantly differing conclusions about what is an appropriate balance between precision and recall, or about what resource should be devoted to human review. Consequently it can be argued that the effect on user content cannot be predicted. That smacks of arbitrariness. 

The safeguards for fuzzy keyword detection of articles for use in fraud are more extensive, as would be expected for a technology that is inherently more likely to throw up false positives. The consultation document points out that the recommendation:

"...differs from our proposed measures regarding CSAM hashing and the detection of CSEA links which focus on the detection of positive matches with content (or URLs that provide access to content) that has already been determined to be illegal." [Annex 15, A15.121]

Unlike with CSAM URL and hash matching, the draft Code of Practice envisages that the service provider may compile its own list of fraud keywords. It contains safeguards around establishment, testing, review and security of the list. It contains equivalent provisions to perceptual hash matching for configuration of the technology so as to strike “an appropriate balance between precision and recall”, stipulating equivalent matters to be taken into account. Ofcom envisages that the safeguards will mean that it will be ‘highly likely’ that a keyword hit will correspond to an offence:

“In light of the above, we would expect any content detected as a result of applying this [keyword technology] measure to be highly likely to amount to an offence concerning articles for use in frauds.” [Volume 4, para 14.249]

It goes on:

“We recognise however that the keyword detection measure we are considering will enable services to identify content about which no prior illegal content judgment or determination has been made and that it may result in false positives. It may identify legitimate content (such as news articles or academic articles) which discuss the supply of articles for use in fraud. It is for this reason that we are not recommending that services take down all content detected by the technology, and are instead recommending that it be considered by services in accordance with their internal content moderation policies.” [ibid]
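A minimal sketch may help to show why such false positives arise (it is purely illustrative: the keyword list, similarity measure and threshold are invented, and nothing here reflects any technology that Ofcom recommends or that providers actually use):

```python
# Illustrative sketch of a crude fuzzy keyword matcher of the general kind discussed
# in the consultation. The keywords, threshold and use of difflib are assumptions for
# demonstration, not Ofcom's recommended design.
from difflib import SequenceMatcher

FRAUD_KEYWORDS = ["fullz", "cloned cards", "bank logs"]  # invented example list
THRESHOLD = 0.8  # similarity above which a span of text counts as a hit

def fuzzy_hits(text: str) -> list[tuple[str, str, float]]:
    """Return (keyword, matched_span, similarity) for word spans resembling a keyword."""
    words = text.lower().split()
    hits = []
    for keyword in FRAUD_KEYWORDS:
        span_len = len(keyword.split())
        for i in range(len(words) - span_len + 1):
            span = " ".join(words[i:i + span_len])
            score = SequenceMatcher(None, keyword, span).ratio()
            if score >= THRESHOLD:
                hits.append((keyword, span, round(score, 2)))
    return hits

# A listing offering articles for use in fraud, and a news report about the same trade:
print(fuzzy_hits("selling fresh fullz and cloned cards, DM me"))
print(fuzzy_hits("police seized cloned cards and warned that 'fullz' are traded online"))
# Both texts produce hits; only the first plausibly amounts to an offence, which is
# why detection alone cannot determine illegality and onward moderation matters.
```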

As with perceptual hash matching, the draft Code of Practice provides for after-the-event periodic human review of some detected content. Whereas for perceptual hash matching this has to be ‘an appropriate proportion’, for fraud detection it has to be ‘a reasonable sample’. Again, it sets out principles to be taken into account in deciding what is a reasonable sample. These bear some similarities to, but are not identical to, those for perceptual hash matching. For instance, there is no stipulation that review resource should be proportionate to the degree of accuracy achieved by the technology.

Evaluating the adequacy of the fraud keyword safeguards is complicated by the latitude that the recommendations give service providers as to what kind of action to take following initial keyword detection, and by possible statutory interpretation questions as to whether (and if so in what way) the illegality judgement provisions of S.192 and the swift takedown obligations of S.10(3)(b) apply. 

Ofcom's approach is summarised thus:

"... we do not consider it appropriate to recommend that services swiftly take down all content detected as a positive match by their keyword detection technology, instead we recommend (as discussed below) that the decision on whether or not the content should be taken down should be taken in accordance with their content moderation systems and processes." [Annex 15, A15.122]

This is consistent with Ofcom’s broader policy approach to content moderation:

“Given the diverse range of services in scope of the new regulations, a one-size-fits-all approach to content moderation would not be appropriate. Instead of making very specific and prescriptive proposals about content moderation, we are therefore consulting on a relatively high-level set of recommendations which would allow services considerable flexibility about how to set up their content moderation teams.” [Volume 4, p.18]

 Ofcom continues, in relation to its fraud keyword recommendations:

“Consistent with Chapter 12, we are not persuaded that it would be appropriate to specify in detail how services should configure their content moderation systems and processes to take account of content detected by the keyword detection technology (for example, that there be human moderation of all such content), or the outcomes that those systems and processes should achieve (for example, through detailed KPIs).” [Annex 15, A15.123]

It then says:

“We are proposing in that Chapter that all U2U service providers must have in place content moderation systems or processes designed to take down illegal content swiftly.” [Annex 15, A15.124]

The area in which the keyword recommendations depart most significantly from hash and URL matching is thus the steps to be taken in respect of positive keyword matches: treating them in accordance with the service provider’s internal content moderation systems and processes. Ofcom’s approach is not to be prescriptive, but to give service providers broad latitude in how they respond to such matches.

There is, however, an underlying dilemma. There are significant costs and risks associated with being prescriptive: the interference with a platform’s own rights (e.g. under ECHR Protocol 1, Article 1), the unlikelihood that a single size of straitjacket can fit all in-scope service providers, prejudice to existing services, the chilling or dampening effect on the development of new services, and the greater likelihood that, faced with a prescriptive requirement, service providers will take an over-cautious approach to blocking and removals. 

Yet the less prescriptive the measures, the broader the range of permissible approaches, the less predictable the effect on users and the greater the likelihood of arbitrary interference with user rights. This dilemma is not of Ofcom’s making. It is hardwired into the Act, but it falls to Ofcom to resolve it. It is an unenviable task. It may be impossible.

Specifically in relation to the fraud keyword detection recommendation, Ofcom says:

"... Implementations that substantially impact on freedom of expression, including the automatic take down of detected content, could be in accordance with the measure in our Code of Practice.” [Chapter 14, para 14.283]

and:

"whether or not such content were, incorrectly, subject to takedown would depend on the approach, to content moderation adopted by the service, rather than the content's detection by the keyword detection technology in and of itself." [Chapter 14, paras 14.284, 14.302]

Ofcom acknowledges that:

“There could therefore be variation in the impact on users’ freedom of expression arising from services’ different implementations of the technology and different approaches to moderation and take down of any detected content.” [para 14.283]

Ofcom does not, however, discuss the implications for the “capable of being adequately examined” requirement if those variations are insufficiently foreseeable.

The discussion in Annex 15 contemplates that a service provider might have “no systems and processes in place to identify false positives before content is taken down”. That, it is said, would be a factor leaning towards configuring the system for greater precision at the expense of recall.

Recommended safeguards for content moderation generally include setting of performance targets, as they relate to accuracy of decision-making; training and materials; and appeals. For performance targets, it is for the service provider to balance the desirability of taking illegal content down swiftly against the desirability of making accurate moderation decisions. As above, different service providers could apply that guidance yet reach significantly different conclusions.

In the context of proportionality Ofcom seeks to diminish the impact on users’ freedom of expression by exempting news publisher content from the fraud keyword matching recommendation (reflecting the Act's exclusion of such content from regulated U2U content). However, that prompts the question of how service providers are to distinguish between news publisher content and the rest, in the context of an automated system: something which raises its own safeguards issues.

Ofcom’s fraud keywords recommendation cross-refers to its Recommendation 4B for large or multi-risk services: that the provider should set and record (but need not necessarily publish) internal content policies setting out rules, standards and guidelines around what content is allowed and what is not, and how policies should be operationalised and enforced. The policies should be drafted such that illegal content (where identifiable as such) is not permitted.    

Recommendation 4A (which is stated not to apply to CSAM perceptual hash and URL matching, but does not exclude fraud keyword detection) also appears potentially relevant to the fraud keyword matching recommendation: the service provider should have systems or processes designed to swiftly take down illegal content of which it is aware (mirroring the statutory obligation in S.10(3)).

Recommendation 4A goes on to provide that, for that purpose, when the provider has reason to suspect that content may be illegal content, the provider should either make an illegal content judgement in relation to the content and, if it determines that the content is illegal, swiftly take it down; or do the same where its terms of service prohibit the type of illegal content in question and the content is in breach. 

Ofcom comments in relation to Recommendation 4A that:

"The design of this option is not prescriptive as to whether services use wholly or mainly human or automated content moderation processes." [Volume 12, para 12.50]

Thus there appears to be a potential four-way interaction between internal content moderation policies, the statutory takedown obligation, the Recommendation 4A takedown recommendation, and the provider's public terms of service. 

How these might mesh with each other is not immediately clear to this reader. In part this could depend on questions of interpretation of the Act, such as whether awareness for purposes of the statutory takedown obligation requires human awareness or can be satisfied by an automated system, and if so whether awareness equates to reasonable grounds to infer under S.192.    

Overall, the scope for arbitrary interference with users' freedom of expression appears to be greater for fraud keyword detection than for CSAM hash and URL matching.

The question of safeguards for proactive, automated detection systems is due to raise its head again. Ofcom has said that it is planning an additional consultation later this year on how automated tools, including AI, can be used to proactively detect illegal content and content most harmful to children – including previously undetected child sexual abuse material.

30 July 2024. Correction to description of 'high hurdle'. 


Monday 22 July 2024

The Online Safety Act illegality duties - a regime about content?

Part 3 of a short series of reflections on Ofcom’s Illegal Harms consultation under the Online Safety Act 2023 (OSA). 

This post analyses the illegality duties created by the OSA. That is not as simple as one might hope. The intrepid reader has to hack their way through a thicket of separately defined, subtly differing, duties. Meanwhile they must try to cope with a proliferation of competing public narratives about what the Act is - or ought to be - doing, such as the suggestion that the regime is really about systems and processes, not content. 

The specific safety duties imposed by the OSA are the logical starting point for an analysis of what the Act requires of service providers. I have categorised the duties in a way which, it is hoped, will illuminate how Ofcom has approached the consultation.  

At the highest level the Act’s illegality duties are either substantive or risk assessment. The substantive duties may require the service provider to take measures affecting the design or operation of the service. The outcomes of the obligatory risk assessment feed into some of the substantive duties.

In my proposed categorisation the substantive service provider duties fall into three categories: content-based, non-content-based and harm-based. A content-based duty is framed solely by reference to illegal content, with no mention of harm. A non-content-based duty makes no mention of either content or harm, but may refer to illegal activity. A harm-based duty is framed by reference to harm arising from or caused by illegal content or activity.  

The tables below divide the various substantive duties into these categories.

The risk assessment duties are a more extensive and complex mixture of duties framed by reference to illegal content, illegal activities and harm.

The differences in framing are significant, particularly for the substantive duties, since they dictate the kinds of measures that could satisfy the different kinds of duty. For instance:

An operational substantive content-based duty is focused on identifying and addressing items of illegal content — such as by reactively removing and taking them down. Recommended measures have to reflect that. (But see discussion below of the difference between a design duty and an operational duty in the context of a proactive, preventive duty.)  

A substantive duty to mitigate risk of harm is less focused. A recommended measure would not necessarily involve evaluating the illegality of content. It could consist of a measure that either does not affect content at all, or does so content-agnostically. A reporting button would be an example of such a measure, as would a cap on the permissible number of reposts for all content across the board.

But a recommended measure within this category might yet have content-related aspects. It could be suggested, for instance, that in order to mitigate the risk of harm arising from a particular kind of illegal content the provider should identify content of that kind and take recommended steps in relation to it.

Similarly a risk assessment duty could involve the service provider in evaluating individual items of content, even if the duty ranges more widely: 

“Your risk assessment should not be limited to an assessment of individual pieces of content, but rather consider how your service is used overall. In particular, the requirement to assess the risk of the service being used to commit or facilitate priority offences may mean considering a range of content and behaviour that may not amount to illegal content by itself.” [Service Risk Assessment Guidance, Boxout preceding para A5.42.]

It can be seen in the table below that the harm-based risk assessment duties are further divided into content-related and non-content related. The former all make some reference to illegal content in the context of a harm-based duty.

Within the framework of the Act, not all illegality is necessarily harmful (in the Act's defined sense) and not all kinds of harm are in scope. There is a conceptual separation between illegality per se, illegality that causes (or risks causing) harm as defined, and harm that may arise otherwise than from illegality. Those category distinctions have to be borne in mind when analysing the illegality duties imposed by the Act. It bears repeating that when the Act refers to harm in connection with service provider duties it has a specific, limited meaning: physical or psychological harm.

So equipped, we can categorise and tabulate the illegality duties under the Act. For simplicity the table shows only the duties applicable to all U2U service providers, starting with the substantive duties: 

Category | Description | OSA section reference
Content-based | Proportionate measures relating to design or operation of service to prevent individuals encountering priority illegal content by means of the service. | 10(2)(a)
Content-based | Operate service using proportionate systems and processes designed to minimise the time for which priority illegal content is present. | 10(3)(a)
Content-based | Operate service using proportionate systems and processes designed to swiftly take down illegal content where the provider is alerted or otherwise becomes aware of its presence. | 10(3)(b)
Non-content-based | Proportionate measures relating to design or operation of service to effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence identified in the most recent illegal content risk assessment. | 10(2)(b)
Harm-based | Proportionate measures relating to design or operation of service to effectively mitigate and manage the risks of harm to individuals identified in the most recent illegal content risk assessment (see 9(2)(g)). | 10(2)(c)

 These are the illegality risk assessment duties:

Category | Description | OSA section reference
Content-based | Assess level of risk of users encountering illegal content of each kind. | 9(5)(b)
Content-based | Assess level of risk of functionalities facilitating presence or dissemination of illegal content. | 9(5)(e)
Non-content-based | Assess level of risk of service being used for commission or facilitation of a priority offence. | 9(5)(c)
Non-content-based | Assess level of risk of functionalities facilitating use of service for commission or facilitation of a priority offence. | 9(5)(e)
Non-content-based | Assess how the design and operation of the service may reduce or increase the risks identified. | 9(5)(h)
Harm-based (content-related) | Assess level of risk of harm to individuals presented by illegal content of different kinds. | 9(5)(d)
Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from identified risk of harm to individuals presented by illegal content of different kinds. | 9(5)(d) and (g)
Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from identified risk of individual users encountering illegal content. | 9(5)(b) and (g)
Harm-based (content-related) | Assess nature and severity of harm that might be suffered by individuals from identified risk of functionalities facilitating presence or dissemination of illegal content. | 9(5)(e) and (g)
Harm-based (non-content-related) | Assess level of risk of harm to individuals presented by use of service for commission or facilitation of a priority offence. | 9(5)(d)
Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from identified level of risk of harm to individuals presented by use of service for commission or facilitation of a priority offence. | 9(5)(d) and (g)
Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from identified risk of service being used for commission or facilitation of a priority offence. | 9(5)(c) and (g)
Harm-based (non-content-related) | Assess different ways in which the service is used and impact of such use on level of risk of harm that might be suffered by individuals. | 9(5)(f)
Harm-based (non-content-related) | Assess nature and severity of harm that might be suffered by individuals from identified risk of different ways in which the service is used and impact of such use on level of risk of such harm. | 9(5)(f) and (g)

The various harm-based illegality risk assessment duties feed into the substantive harm mitigation and management duty of S.10(2)(c). That duty stands independently of the three content-based and one non-content-based illegality duties, as can be seen from the tables and is illustrated in this visualisation.


Content or systems and processes?

In March last year the Ofcom CEO drew a contrast between systems and processes and a content regime, suggesting that the OSA is about the former and not really the latter (see Introduction). 

The idea that the Act is about systems and processes, not content, prompts a close look at the differences in framing of the substantive illegality duties. As can be seen from the table above, the proactive prevention duty in S.10(2)(a) requires the service provider to:

“to take or use proportionate measures relating to the design or operation of the service to… prevent individuals from encountering priority illegal content by means of the service” (emphasis added)

Thus a measure recommended by Ofcom to comply with this duty could be limited to the design of the service, reflecting the ‘safe by design’ ethos mentioned in Section 1(3) of the Act. As such, the S.10(2)(a) duty, although content-based, has no necessary linkage to assessing the illegality of individual items of user content posted to the service. That possibility, however, is not excluded. A recommended proactive measure could involve proactively detecting and blocking individual items, as contemplated by the section’s reference to operational measures.

In contrast, the content removal duty in S.10(3)(b) is specifically framed in terms of use of operational systems and processes: 

“(3) A duty to operate a service using proportionate systems and processes designed to … (b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content” (emphasis added)

In view of this drafting it is not surprising that, notwithstanding the reference to proportionate systems and processes, Ofcom in several places characterises the Act as imposing a simple duty to remove illegal content: 

"When services make an illegal content judgement in relation to particular content and have reasonable grounds to infer that the content is illegal, the content must however be taken down." [para 26.14, Illegal Content Judgements Guidance discussion]

“Within the illegal content duties there are a number of specific duties. As part of the illegal content safety duty at section 10(3)(b) of the Act, there is a duty for a user-to-user service to “swiftly take down” any illegal content when it is alerted to the presence of it (the ‘takedown duty’).” [para A1.14, draft Illegal Content Judgements Guidance]

“As in the case of priority illegal content, services are required to take down relevant nonpriority illegal content swiftly when they are made aware of it.” [para 2.41, Volume 1]

“Where the service is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, services must swiftly take down such content (section 10(3)(b)).” [para 12.6, Volume 4]

The framing of the S.10(3)(b) duty is tightly focused on the removal actions to be taken by a service provider in relation to individual items of content when it becomes aware of them. The service provider must have proportionate systems and processes designed to achieve the result (take down), and must use them in the operation of the service. 

This duty inescapably requires individual judgements to be made. It does not, however, from a supervision and enforcement point of view, require a service provider to get every individual judgement right.

From the perspective of the service provider the focus of Ofcom’s supervision and enforcement will indeed be on its systems and processes:

“It is important to make clear that, as the regulator, Ofcom will not take a view on individual pieces of online content. Rather, our regulatory approach is to ensure that services have the systems and processes in place to meet their duties.” [Overview, p.17; Volume 4, p.19]

“In line with the wider approach to delivering online safety, supervision will focus on the effectiveness of services’ systems and processes in protecting their users, not on individual pieces of content.” [para 30.5, Volume 6]

That does not mean however, that (as Ofcom appears to suggest in the following extract) the regulatory regime is not about regulating individual user content: 

“It is important to note that the Online Safety regime is about service providers’ safety systems and processes, not about regulating individual content found on such services. The presence of illegal content or content that is potentially harmful to children does not necessarily mean that a service provider is failing to fulfil its duties in the Act. We would not therefore be likely to take action solely based on a piece of harmful content appearing on a regulated service.” [Enforcement Guidance, para A3.6]

The purpose of at least the operational content-based aspects of the OSA regime is to harness the control that intermediary service providers can exercise over their users in order (indirectly) to regulate individual items of user content. The fact that a service provider may not be penalised for an individual misjudgement does not alter the fact that the service provider has to have a system in place to make judgements and that the user in question stands to be affected when their individual post is removed as a result of a misjudgement. The operational content-based duties exist and are about regulating individual content.

Parenthetically, it is also perhaps doubtful whether Ofcom can, in reality, perform its role entirely in the abstract and avoid making its own judgements on individual items of content. For instance, if prevalence of illegal material on a service is relevant to Ofcom’s enforcement priority assessment, how could that be determined without evaluating whether individual items of user content on the service are illegal?

Some may believe that Ofcom's approach is too heavily weighted towards individual content judgements by service providers. However, it is difficult to see how such content-based measures can be avoided when content-based duties — removal and takedown of individual items of illegal content in the operation of the service — are hardwired into the Act, requiring the exercise of judgement to determine whether the content in question is or is not illegal.

That said, in the area of the S.10(2) proactive preventive measures Ofcom has, as already alluded to, much broader scope to assess proportionality and decide what kinds of measures to recommend (or not) and what safeguards should accompany them. That will be the subject of Part 4 of this series.




The Online Safety Act illegality duties: a regime about harm?

This is Part 2 of a short series of reflections on Ofcom's Illegal Harms Consultation under the Online Safety Act 2023 (OSA). Ofcom are currently in the process of considering submissions following closure of the consultation in February 2024.

The very title of the Ofcom consultation — Illegal Harms — prompts questions about the illegality duties. Are they about illegal content? Are they about harm? Is all illegal content necessarily harmful? What does the Act mean by harm?

What is meant by harm?

The answer to the last question ought to be simple. For the purpose of the safety duties harm means “physical or psychological harm". For the remaining questions, the devil resides in the tangled undergrowth of the Act (discussed in Part 3, analysing and categorising the Act's Illegal Content duties). 

However, the Act’s specific meaning of harm is often glossed over.  Ofcom’s Quick Guide to illegal content risk assessments mentions ‘harm’ or ‘illegal harm’ 16 times without pointing out that for the purpose of the safety duties harm has a defined meaning. Considering how many of the illegal content risk assessment duties are framed by reference to harm (see table in Part 3), it is striking that neither the consultation section explaining Ofcom’s approach to the risk assessment duty (Volume 3), nor the draft Illegality Risk Assessment Guidance itself (Annex 5), mentions the Act’s specific meaning of harm.

The four-page consultation Overview is similarly lacking, while mentioning ‘harm’ or ‘illegal harm’ 16 times. Neither is the definition mentioned in Ofcom’s 39-page summary of each chapter of the consultation, nor in the consultation’s 38-page Volume 1, Background to the Online Safety regime.

At the start of Volume 2 of the consultation (the Ofcom Register of Risks for illegal content) we do find:

“The Online Safety Act (the Act) requires Ofcom to carry out sector-wide risk assessments to identify and assess the risk of physical and psychological harm to individuals in the UK presented by regulated user-to-user (U2U) and search services, and to identify characteristics relevant to such risks of harm.” [para 5.1]

The footnote to that paragraph says:

“‘Risks of harm’ refers to the harm to individuals presented by (a) content on U2U or search services that may amount to the offences listed in the Act, and (b) the use of U2U services for the commission and/or facilitation of these offences (collectively, the ‘risks of harm’). ‘Harm’ means physical or psychological harm; we discuss physical or psychological harm as part of our assessment of the risks of harm.”

The Register of Risks section then continues to emphasise the Act’s specific meaning of harm.  Of the four separate glossaries and lists of definitions contained in the consultation papers, only the Register of Risks Glossary includes the Act’s definition of harm. 

A footnote to a paragraph in the Register of Risks section which references an Ofcom survey acknowledges the risk of overreach in unbounded references to harm:

“63% of internet users 13 years old and over had seen or experienced something potentially harmful in the past four weeks.[Note: these may capture a broad range of potentially harmful experiences that go beyond illegal harms] Source: Ofcom, 2022. Online Experiences Tracker. [accessed 10 September 2023].” [footnote 22, para 6.1]

To give an example, the survey in question prompted respondents that potential harm included “Generally offensive or ‘bad’ language, e.g. swearing, rudeness”. 

The OSA's illegality duties have not generally embraced the broader and vaguer concepts of societal harm that were discussed at the time of the Online Harms White Paper. Nevertheless, the Ofcom consultation is not immune from straying into the territory of societal harm:

“In most cases the harms we have looked at primarily affect the individual experiencing them. However, in some cases they have a wider impact on society as a whole. For instance, state-sponsored disinformation campaigns can erode trust in the democratic process. All this underlines the need for the new legislation and shows that, while many services have made significant investments in tackling online harm in recent years, these have not yet been sufficient.” [Boxout, Volume 2, p.8]

A state-sponsored disinformation campaign could be relevant to this consultation only if it constitutes an offence within the purview of the Act (e.g. the new Foreign Interference offence). Even then, only a risk of physical or psychological harm to an individual could be relevant to determining what kinds of illegality safety duty might be triggered: content-based, non-content-based or harm-based (as to which, see the discussion in Part 3 of the different categories of duty created by the Act). For the purpose of the Act's illegality safety duties the “wider impact on society as a whole” is not a relevant kind of harm. 

Returning to the consultation title, the term ‘Illegal Harm’ does not appear in the Act. The consultation Glossary essays a definition: 

"Harms arising from illegal content and the commission and facilitation of priority offences".

Volume 1, which describes the illegal content duties, contains a slightly fuller version:

“‘illegal harm’ – this refers to all harm arising from the relevant offences set out in the Act, including harm arising from the presence of illegal content online and (where relevant) the use of online services to commit or facilitate priority offences. …” [Volume 1, para 1.23]

‘Harm’, as already mentioned, is defined in the Act as physical or psychological harm. 

If that definition of Illegal Harm is meant to describe the overall subject-matter of the illegality duties it is incomplete, since in its terms it can apply only to those illegality duties that are framed by reference to harm.

In any event the consultation does not use the phrase ‘Illegal Harms’ consistently with that definition.  It sometimes reads as a catch-all for the illegality duties generally. Often it refers to the underlying offences themselves rather than the harm arising from them. Thus on the second page of the Consultation at a Glance: “The 15 different kinds of illegal harms set out in Ofcom’s draft risk assessment guidance are: Terrorism offences…”.

In places it is difficult to be sure in what sense the term ‘illegal harm’ is being used, or it changes from one paragraph to the next. An example is in the Risk Assessment Guidance:

“A5.23 You must assess the risk of each kind of illegal harm occurring on your service. U2U services need to consider the risk of:

Illegal content appearing on the service – for example, content inviting support for a proscribed organisation (e.g. a terrorist group);

An offence being committed using the service – for example, a messaging service being used to commit grooming offences, in a situation where adults can use the service to identify and contact children they do not know; and

An offence being facilitated by use of the service – for example, the use of an ability to comment on content to enable harassment.

A5.24 You must assess the likelihood of these illegal harms taking place, and the potential impact (i.e. the nature and severity of harm to individuals).

A5.25 As long as you are covering all of the risks of harm to individuals, you can assess these three aspects together when you assess each kind of illegal harm.”

In para A5.23 ‘illegal harm’ is being used to describe three different varieties of risk assessment duty: Sections 9(5)(b),(c) and (e). None of those is framed in the Act by reference to harm. Nor does the Glossary definition of illegal harm fit the usage in this paragraph. 

Paras A5.24 and A5.25 then refer to harm to individuals. That reflects a duty which is framed by reference to harm (Section 9(5)(g)). It is a duty to which the defined meaning of physical or psychological harm applies.

Relatedly, shortly afterwards the Guidance says:

“In the risk assessment, the key objective is for you to consider in a broad way how your service may be used in a way that leads to harm.” (Box-out following A5.41)

It does not mention the Act’s definition of harm (assuming, as presumably it must do, that that is what the Guidance means by harm in this context).

Shorthand is probably unavoidable in the challenging task of rendering the consultation and the OSA understandable. But when it comes to describing a key aspect of the safety duties, shorthand can result in confusion rather than clarity. The term 'illegal harm' is especially difficult, and might have been best avoided. Given its specific meaning in the Act, rigorous and clear use of the term ‘harm' is called for.