This is Part 5 of a series of reflections on Ofcom’s Illegal Harms Consultation under the Online Safety Act (OSA). Ofcom’s consultation (which closed in February 2024) ran to a mammoth 1728 pages, plus an additional 77 pages in its recent further consultation on torture and animal cruelty. The results of its consultation are expected in December.
For readers not fully conversant with the OSA, the reason why Ofcom has to consult at all is that the OSA sets out most of the illegal content service provider duties in stratospherically high-level terms, anticipating that Ofcom will bring the obligations down to earth by means of concretely articulated Codes of Practice and Guidance. If the Act were an algorithm, this would be a non-deterministic process: there is no single answer to the question of how the high-level duties should be translated into detailed measures. The number and range of possibilities are as good as infinite.
The main contributor to this state of affairs is the way in which the Act frames the service providers’ duties as requirements to put in place “proportionate” systems and processes designed to achieve stipulated aims. That leaves tremendous latitude for debate and judgement. In simple terms, Ofcom’s task is to settle on a set of systems and processes that it considers to be proportionate, then embody them in concrete Codes of Practice, recommended measures and guidance. Those proposed documents, among other things, are what Ofcom has been consulting on.
Of course Ofcom does have to work within the statutory constraints of the Act. It cannot recommend measures that stray outside the boundaries of the Act. The measures that it does recommend should interact sensibly with the duties defined in the Act. For abstractly expressed duties, that presents little problem. However, a tightly drawn statutory duty could have the potential to collide with specific measures recommended by Ofcom.
Awareness of illegality
One such duty is Section 10(3)(b). This requires a U2U service provider to have proportionate systems and processes in place designed swiftly to take down illegal content upon becoming aware of it. This is a relatively concrete duty, verging on an absolute takedown obligation (see discussion in Part 3 of this series).
A service provider will therefore need to understand whether, and if so at what point, the takedown obligation kicks in when it is implementing Ofcom’s operational recommendations and guidance. That turns on whether the service provider has ‘become aware’ of the presence of illegal content.
Behind that innocuous little phrase, however, lie significant issues of interpretation. For instance, if an automated system detects what it thinks is illegal content, does that trigger the Section 10(3)(b) takedown duty? Or is it triggered only when a human becomes aware? If human knowledge is necessary, how does that square with Section 192, which requires a provider to treat content as illegal if it has reasonable grounds to infer illegality – and which specifically contemplates fully automated systems making illegality judgements?
Ofcom’s consultation does not spell out in terms what interpretations have been assumed for the purposes of the draft Codes of Practice, Guidance and other documents that Ofcom is required to produce. It is thus difficult to be sure how some aspects of the proposed recommended measures are intended to mesh with S.10(3)(b) of the Act.
This table sets out the questions of interpretation raised by S.10(3)(b).
| S.10(3)(b) duty | Interpretation question | Significance |
| --- | --- | --- |
| “A duty to operate a service using proportionate systems and processes designed to … where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.” | Does “becomes aware” mean that a human being has to be aware? | Some of Ofcom’s recommendations involve automated detection methods. Could the swift takedown duty kick in during automated detection, or does it apply only if the content is passed on to a human moderator? |
| | Does “aware” mean the same as “reasonable grounds to infer” in S.192(5) and (6) (illegal content judgements)? | If the provider has reasonable grounds to infer that content is illegal, it must treat it as such (S.192(5)). Does that mean it must swiftly take it down under S.10(3)(b), or does “aware” set a different threshold? |
| | If “aware” means the same as “reasonable grounds to infer” in S.192(5) and (6), is the answer to the ‘human being’ question affected by the fact that S.192 expressly contemplates that a judgement may be made by automated systems alone? | |
It is also noteworthy that the obligation under Section 66 to refer previously undetected and unreported CSEA content to the National Crime Agency is triggered by the provider becoming ‘aware’ of the content – again, not further defined. In the context of S.66, the Information Commissioner in its submission to the Ofcom Illegal Harms consultation observed:
“Our reading of measure 4G is that it could allow for the content moderation technology to be configured in such a way that recognises that false positives will be reported to the NCA. Whilst we acknowledge that it may not be possible to completely eliminate false positives being reported, we are concerned that a margin for error could be routinely “factored into” a service’s systems and processes as a matter of course. This is unlikely to be compatible with a service taking all reasonable steps to ensure that the personal data it processes is not inaccurate.
We therefore consider that services should be explicitly required to take into account the importance of minimising false positives being reported to the NCA."
Human awareness only?
Consider a hypothetical Code of Practice measure that recommends automated detection and blocking of a particular kind of illegal user content. Can detection by an automated system constitute the service provider becoming aware of it, or (as an English court in McGrath v Dawkins, a case concerning the eCommerce Directive hosting shield, appears to have held) only if a human being is aware?
If the latter, then Ofcom's hypothetical recommendation will not interact with S.10(3)(b). If the former, then the possibility that the S.10(3)(b) removal obligation would be triggered during automated detection has to be factored in. The Ofcom consultation is silent on the point.
Awareness threshold
Relatedly, what is the threshold for awareness of illegal content? S.10(3)(b) has similarities to the eCommerce Directive hosting liability shield. Eady J said of that provision: “In order to be able to characterise something as ‘unlawful’ a person would need to know something of the strength or weakness of available defences” (Bunt v Tilley). Has that standard been carried through to S.10(3)(b)? Or does the standard defined in S.192 OSA apply?
S.192 stipulates the approach to be taken where a system or process operated or used by a provider of a service for the purpose of compliance with duties under the Act involves a judgement by a provider about whether content is illegal content:
"In making such judgements, the
approach to be followed is whether a provider has reasonable grounds to infer
that content is content of the kind in question (and a provider must treat
content as content of the kind in question if reasonable grounds for that
inference exist)."
In marked contrast to Eady J's interpretation of the eCommerce Directive hosting shield, S.192 goes on to say that the possibility of a defence is to be ignored unless the provider positively has reasonable grounds to infer that a defence may be successfully relied upon.
The OSA does not address the interaction between S.10(3)(b) and S.192 in terms, contenting itself with a cryptic cross-reference to S.192 in the definition of illegal content at S.59(16):
"See also section 192 (providers'
judgements about the status of content)".
The Ofcom consultation implicitly takes the position that awareness (at any rate by a human moderator — see Automated Illegal Content Judgements below) is synonymous with the S.192 standard:
"When services make an illegal content
judgement in relation to particular content and have reasonable grounds to
infer that the content is illegal, the content must however be taken down"
(Illegal Judgements Guidance Discussion, para 26.14)
Mixed automated-human illegal content judgements
Returning to our hypothetical Code of Practice measure that recommends automated detection and blocking of a particular kind of illegal user content, such a system would appear to involve making a judgement about illegality for the purpose of S.192 regardless of whether a removal obligation under S.10(3)(b) is triggered.
If an automated detection system flags up posts for subsequent human review, the final word on illegality rests with human moderators. Does that mean that their judgement alone constitutes the illegality judgement for the purpose of S.192? Or is the initial automated triage also part of the illegality judgement? S.192 contemplates that ‘a judgement’ may be made by means of ‘automated systems or processes together with human moderators’. That may suggest that the whole combined system or process constitutes the judgement.
If so, does that imply that the initial automated detection, being part of the illegal content judgement process, could not apply a higher threshold than the ‘reasonable grounds to infer’ test stipulated by S.192?
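By way of a purely illustrative sketch (invented names, thresholds and scores, drawn neither from the Act nor from the consultation), the obvious way to ‘apply’ such standards in an automated triage stage is to map them onto classifier score cut-offs, which makes the comparison question very concrete:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    NO_ACTION = "no action"
    HUMAN_REVIEW = "queue for human moderator"
    AUTO_TAKEDOWN = "automatic takedown"

# Hypothetical numeric proxies for the legal standards discussed above.
# S.192's "reasonable grounds to infer" has no defined numeric equivalent;
# a service would have to pick a cut-off and defend that choice.
REASONABLE_GROUNDS_SCORE = 0.7   # assumed proxy for "reasonable grounds to infer"
LIKELY_SCORE = 0.9               # assumed proxy for a higher "likely" threshold

@dataclass
class Detection:
    content_id: str
    classifier_score: float  # confidence reported by an automated detection model

def triage(detection: Detection, auto_takedown_enabled: bool) -> Route:
    """Route automatically detected content.

    The open question in the text: if this triage step forms part of a single
    S.192 'judgement' made by automated systems together with human moderators,
    can it lawfully apply a threshold higher than 'reasonable grounds to infer'?
    """
    if detection.classifier_score < REASONABLE_GROUNDS_SCORE:
        return Route.NO_ACTION
    if auto_takedown_enabled and detection.classifier_score >= LIKELY_SCORE:
        # Fully automated removal: arguably an illegality judgement with no human involved.
        return Route.AUTO_TAKEDOWN
    # Otherwise pass to a human moderator, who applies S.192 in full.
    return Route.HUMAN_REVIEW

if __name__ == "__main__":
    print(triage(Detection("post-123", 0.95), auto_takedown_enabled=True))
    print(triage(Detection("post-456", 0.75), auto_takedown_enabled=True))
```

On a design like this, the choice of numeric cut-off is the point at which the abstract S.192 standard would have to be embedded in the technology.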
That question assumes (as does S.192 itself) that it is possible to embed within any given technology an inference threshold articulated in those terms; which brings us to our next topic.
Automated illegal content judgements
One of the most perplexing aspects of the OSA has always been how an automated system, operating in real time on limited available information, can make accurate judgements about illegality or apply the methodology laid down in S.192, such as determining whether it has reasonable grounds to make inferences about the existence of facts or the state of mind of users.
Undaunted, S.192 contemplates that illegality judgements may be fully automated:
“... whether a judgement is made by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators."
The OSA requires Ofcom to provide Guidance to service providers about making illegality judgements. It has produced a draft document, running to 390 pages, setting out how the S.192 criteria should be applied to every priority offence and a few non-priority offences.
Ofcom's draft Guidance appears to assume that illegality judgements will be made by human moderators (and implicitly to equate awareness under S.10(3)(b) with reasonable grounds to infer under S.192):
"The
process of making an illegal content judgement, as set out in the Illegal
Content Judgement Guidance, presupposes that the content in question has been
brought to the attention of a moderator making such a judgement, and as a
result [the S.10(3)(b) awareness] requirement is fulfilled." (Illegal
Judgements Guidance Discussion, para 26.14 fn 5)
Human involvement may be a reasonable assumption where decisions are reactive. However, Ofcom has included in its draft Codes of Practice proactive prevention recommendations that are either automated or at least encompass the possibility of fully automated blocking or removal.
Annex 15 to the consultation discusses the design of various kinds of automated detection, but does not address the possibility that any of them involves making an illegal content judgement covered by S.192.
In apparent contrast with the human moderation assumed in the footnote quoted above, the Illegal Content Judgements Guidance also describes itself as 'technology-agnostic'.
“26.38 Our draft guidance therefore proposes a ‘technology-agnostic approach’ to reasonably available information and to illegal content judgements in general. We have set out which information we believe is reasonably available to a service, regardless of technology used to collect it, on an offence-by-offence basis. It is our understanding that, while automated tools could be used to collect more of this information or to do so more quickly, there is no additional class of information which automated tools could have access to that human moderators could not. We therefore take the view that information may be collected using any approach the service prefers, so long as when it is factored into an illegal content judgement, this is done in a way which allows a reasonable inference to be made.”
and:
"A1.42 We have recommended
three automated content technologies in our Codes of Practice; hashing
technology recognising child sexual abuse material; URL detection technology
recognising URLs which have previously been identified as hosting child sexual
abuse material (CSAM); and search to detect content containing keywords
strongly associated with the sale of stolen credentials (i.e. articles for use
in fraud). These technologies do not offer an additional class of information
that human moderators could not. We therefore take a 'technology-agnostic
approach' to illegal content judgements."
The usual concern about reasonably available information, however, is not that automated content moderation technologies will have additional information available to them compared with human moderators, but that they will tend to have less. Moreover, they will be required to make decisions based on that information on the fly, in real time. Consequently such decisions are liable to be less accurate than those of human moderators, even if automated technology could be regarded as otherwise equivalent to a human being in its ability to make judgements.
The thinking may be that since the elements of a given offence, and the evidence required to establish reasonable grounds to infer, are in principle the same regardless of whether illegality judgements are made by automated systems or human beings, there is no need to differentiate between the two in the Guidance.
However, it seems artificial to suggest (if that is what is being said) that automated illegality judgements do not give rise at least to practical, and quite likely deeper, issues that differ from those raised by human judgements. The “technology-agnostic” label is not, in truth, a good description. The draft guidance may be agnostic, but if so the agnosticism is as to whether the judgement is made by a human being or by technology. That is a quite different matter.
Ofcom’s automated moderation recommendations
This brings us to Ofcom's specific automated moderation recommendations. Do any of them involve making illegal content judgements to which S.192 would apply? For simplicity this discussion focuses on U2U service recommendations, omitting search engines.
To recap, Ofcom recommends three kinds of U2U automated detection and blocking or removal of illegal content (although for different categories of service in each case):
• Perceptual hash matching against a database of known CSAM material (draft U2U Code of Practice, A4.23)
• URL matching against a list of known CSAM URLs (draft U2U Code of Practice, A4.37)
• Fuzzy keyword matching to detect articles for use in fraud (draft U2U Code of Practice, A4.45)
Each of these recommendations envisages that at least some moderation decisions will be taken without human involvement.
For CSAM perceptual hash matching the draft Code of Practice provides that the provider should ensure that human moderators are used to review "an appropriate proportion" of content detected as CSAM. The remainder, implicitly, would be swiftly taken down or blocked automatically in accordance with draft Code of Practice para A4.24, without human review. The draft CoP sets out how a service provider should go about deciding what proportion of detected content it is appropriate to review.
For CSAM URL matching the draft Code of Practice contains no provision for human review.
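To make the contrast concrete, here is a minimal sketch of how these two CSAM measures might operate in practice: hash matching with a sampled human-review step, and URL matching as a purely mechanistic list lookup. The placeholder data, the 10% sampling figure and the function names are all assumptions for illustration; neither measure prescribes any particular implementation.

```python
import random

# Assumed inputs for illustration: a third-party database of perceptual hashes
# of known CSAM, and a third-party list of URLs known to host CSAM.
KNOWN_CSAM_HASHES: set[str] = {"hash_a", "hash_b"}        # placeholder values
KNOWN_CSAM_URLS: set[str] = {"https://example.invalid/x"}  # placeholder values

# The "appropriate proportion" is for the service to decide; 10% is invented here.
HUMAN_REVIEW_PROPORTION = 0.1

def perceptual_hash(item_bytes: bytes) -> str:
    """Stand-in for a real perceptual hashing algorithm."""
    return "hash_a" if item_bytes else "no_match"

def handle_image(item_id: str, item_bytes: bytes) -> str:
    """Hash matching: most matches are actioned automatically; only a sample is reviewed."""
    if perceptual_hash(item_bytes) in KNOWN_CSAM_HASHES:
        if random.random() < HUMAN_REVIEW_PROPORTION:
            return f"{item_id}: queued for human review"
        return f"{item_id}: taken down automatically"
    return f"{item_id}: no action"

def handle_url(item_id: str, url: str) -> str:
    """URL matching: a purely mechanistic lookup, with no human review step at all."""
    if url in KNOWN_CSAM_URLS:
        return f"{item_id}: blocked automatically"
    return f"{item_id}: no action"

if __name__ == "__main__":
    print(handle_image("img-1", b"example bytes"))
    print(handle_url("post-2", "https://example.invalid/x"))
```

On this sketch the only difference between the two measures is whether a human ever sees a sampled subset of matches; in neither case does a human decision stand between detection and removal for the bulk of detected content.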
For fraud detection using fuzzy keyword matching the draft U2U Code of Practice requires the provider to consider the detected content in accordance with its internal content policies. The consultation explains that:
"…
all large services and those that have assessed themselves as having a medium
or high risk for any type of offence should set internal content policies which
specify how content moderation systems and processes moderate content and
resource them accordingly." [14.230] fn 254.
Such policies could include automatic takedown of detected items. Whilst Ofcom says that "we are not recommending that services take down all content detected by the technology" ([14.249]), such action is within the range of the recommended measure.
"Implementations that
substantially impact on freedom of expression, including the automatic take
down of detected content, could be in accordance with the measure in our Code
of Practice.” [14.283]
The reliance on internal moderation policies appears to be intended to provide services with discretion as to what steps to take with automatically detected content:
"… whether
or not such content were, incorrectly, subject to takedown would depend on the
approach, to content moderation adopted by the service, rather than the
content's detection by the keyword detection technology in and of itself."
[14.284]
Whilst the draft Code of Practice provides for human review of a reasonable sample of detected content, that appears to be a periodic, after-the-event review rather than part of the decision-making process.
Do any of these three recommended systems and processes involve a S.192 judgement "by the provider" as to whether the detected user content is illegal?
Even for URL matching, where the detection and removal or blocking process is entirely mechanistic, the answer is at least arguably yes. It would be quite odd if the fact that a provider is relying on a pre-verified third party list of URLs meant that the provider was not making an illegality judgement, given that the very purpose of the overall system or process is to distinguish between legal and illegal content.
The same argument applies to perceptual hashing, but more strongly since there is an element of judgement involved in the technical detection process as well as in compiling the list or database.
The fuzzy keyword fraud detection recommendation is more obviously about making judgements. The draft Code of Practice recommends that fuzzy keyword technology should be used to assess whether content is 'likely' to amount to an offence (although elsewhere in the Consultation Ofcom uses the phrase 'reason to suspect'). If so, an item of content would then be considered in accordance with the provider's internal policies.
Where in the process an illegality judgement is being made could vary depending on the provider's policy. If detected content is submitted for human review, then it may be plausible to say that the illegality judgement is being made by the human moderator, who should make the decision in accordance with the 'reasonable grounds to infer' approach set out in S.192 and any relevant data protection considerations.
Alternatively, and as already discussed perhaps more in keeping with the language of S.192, the sequential automated and human elements of the process could be seen as all forming part of one illegality judgement. If so, then we could ask how Ofcom’s suggested ‘likely’ standard for the initial automated detection element compares with S.192’s ‘reasonable grounds to infer’. If it sets a higher threshold, is the system or process compliant with S.192?
If detected content is not submitted for human review, the answer to where the illegality judgement is being made could depend on what processes ensue. If takedown of detected content is automatic, that would suggest that the initial triage constituted the illegality judgement. If other technical processes are applied before a final decision, then it may be the final process, or perhaps the overall combination, that constitutes the illegality judgement. Either way it is difficult to see why an illegality judgement is not being made, or why the S.192 provisions would not apply.
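A short sketch may help to show why the location of the judgement turns on the service's internal policy rather than on the detection technology itself. The keyword list, policy options and function names below are invented for illustration, and the simple substring test stands in for whatever fuzzy matching a service might actually deploy.

```python
from enum import Enum

class Policy(Enum):
    HUMAN_REVIEW = "send detected items to a human moderator"
    AUTO_TAKEDOWN = "take detected items down automatically"

# Assumed placeholder phrases loosely illustrating "keywords strongly associated
# with the sale of stolen credentials".
FRAUD_KEYWORDS = ["stolen logins for sale", "bank login details"]

def fuzzy_keyword_hit(text: str) -> bool:
    """Crude stand-in for fuzzy keyword matching."""
    return any(keyword in text.lower() for keyword in FRAUD_KEYWORDS)

def moderate(text: str, policy: Policy) -> str:
    if not fuzzy_keyword_hit(text):
        return "no action"
    if policy is Policy.AUTO_TAKEDOWN:
        # On this path the automated triage is, in substance, the final word:
        # hard to see where else the S.192 judgement could be located.
        return "taken down automatically"
    # On this path a human moderator applies the 'reasonable grounds to infer'
    # test, and the question is whether the automated triage is part of that
    # same judgement or merely a precursor to it.
    return "queued for human review under internal content policy"

if __name__ == "__main__":
    post = "stolen logins for sale, message me"
    print(moderate(post, Policy.AUTO_TAKEDOWN))
    print(moderate(post, Policy.HUMAN_REVIEW))
```

Swapping the policy value changes where (if anywhere) a human applies S.192, without changing the detection technology at all.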
It must be at least arguable that, where automatic removal of automatically detected user content is within the range of actions contemplated by a Code of Practice recommendation, either an illegality judgement governed by S.192 is being made at some point in the process or the process as a whole constitutes such a judgement.
Nevertheless, neither the draft Illegal Judgements Guidance nor the draft Codes of Practice address the potential interaction of S.192 (and perhaps S.10(3), depending on its interpretation) with automated illegality judgements.