Assiduous readers of this blog will know of my fondness for working through concrete examples to illustrate how, once they come into force (now likely to be in Spring next year), platform illegal content duties under the UK Online Safety Act 2023 (OSA) might pan out in practice.
A recurring theme has been that making judgements about the
legality or illegality of user content, as platforms are required to do by the
OSA, is not a simple matter. The task verges at times on the impossible:
platforms are required to make complex legal and factual judgements on incomplete
information. Moreover, the OSA stipulates a relatively low threshold for a
platform to conclude that content is illegal: reasonable grounds to infer. The
combined result is that the OSA regime is likely to foster arbitrary decisions
and over-takedown of legal user content.
The newest opportunity to hypothesise a concrete example is presented by the acquittal of Marieha Hussain, who was charged with a racially aggravated public order offence for carrying, at a pro-Palestine demonstration, a placard depicting Rishi Sunak and Suella Braverman as coconuts. The prosecution alleged that this was a well-known racial slur. The district judge held that it was part of the genre of political satire, and that the prosecution had not proved to the criminal standard that it was abusive.
Ms Hussain was prosecuted for an offence in a public street,
to which the Online Safety Act would not directly apply. However, what if an
image of the placard appeared online? If displaying the placard in the street
was sufficient to attract a criminal prosecution, even if ultimately
unsuccessful, could the OSA (had it been in force) have required a platform to
take action over an image of the placard displayed online?
As it happens the prosecution in Marieha Hussain’s case was prompted
by someone posting a photograph of the placard online, accompanied by a
critical comment. That was followed by a response from the Metropolitan Police,
who were tagged in the post.
If the Online Safety Act duties were in force (and assuming that the court had not yet delivered its acquittal verdict), how would a service provider have to go about deciding whether an online post of a photograph of the placard should be treated as illegal? How would that differ from the court process? Could the differences lead a service provider to conclude that a post containing an image of the placard should be removed? Could (or should) the fact that a prosecution had been instigated for display of the placard in the street, or (before that) that the police had indicated an interest, affect the platform’s illegality judgement?
The prosecution
As far as can be understood from the press reports, Ms Hussain
was prosecuted for a racially aggravated offence under Section 5 of the Public
Order Act 1986. The Section 5 offence (so far as relevant to this example) is:
“(1) A person is guilty of an offence if he—
(a) uses… abusive words or behaviour…, or
(b) displays any writing, sign or other visible representation which is… abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.
(2) An offence under this section may be committed in a public or a private place, except that no offence is committed where the words or behaviour are used, or the writing, sign or other visible representation is displayed, by a person inside a dwelling and the other person is also inside that or another dwelling.
(3) It is a defence for the accused to prove—
(a) that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress, or
(b) that he was inside a dwelling and had no reason to believe that the words or behaviour used, or the writing, sign or other visible representation displayed, would be heard or seen by a person outside that or any other dwelling, or
(c) that his conduct was reasonable.”
Additionally, someone is guilty of the offence only if they intend
their words or behaviour, or the writing, sign or other visible representation,
to be… abusive, or are aware that it may be… abusive.
The racially aggravated version of the offence (which
carries a larger fine) applies if the basic offence is committed and:
“(a) at the time of committing the offence, or immediately before or after doing so, the offender demonstrates towards the victim of the offence hostility based on the victim’s membership (or presumed membership) of a racial … group; or
(b) the offence is motivated (wholly or partly) by hostility towards members of a racial… group based on their membership of that group.”
The ‘victim’ for the purpose of (a) is the person likely to
be caused harassment, alarm or distress.
Both offences are triable only in the magistrates’ court. If
the defendant is acquitted of the racially aggravated offence the court may go
on to consider the basic offence, but only if it is charged in the alternative (which
the CPS Charging Guidance says it should be).
Priority offences
Both the basic offence under Section 5 and the racially
aggravated version are within the scope of the Online Safety Act. They are
listed in Schedule 7 as ‘priority offences’. As such, not only is a service
provider required swiftly to take down illegal content if it becomes aware of
it (OSA Section 10(3)(b)), but it may be required to take proportionate
proactive prevention measures (OSA Section 10(2)(a)).
The Section 5 offence attracted attention during the Online
Safety Bill’s passage through Parliament. On 19 May 2022 the Chair of the Joint
Parliamentary Committee on Human Rights, Harriet Harman MP, wrote to the then
Secretary of State, Nadine Dorries. She said:
“It is hard to see how providers,
and particularly automated responses, will be able to determine whether content
on their services fall on the legal or illegal side of this definition”.
She went on:
“…how will a provider of
user-to-user services judge whether particular words or behaviour online are
“abusive” rather than merely offensive and whether or not they are likely to
cause someone “distress” sufficient to amount to a criminal offence?”
and
“Will the inclusion of section 5
Public Order Act 1986 within the category of priority illegal content, in
practice, result in service providers removing content that does not meet the
criminal threshold, potentially resulting in an interference with the Article
10 rights of users?”
The DCMS Minister, Chris Philp MP, replied on 16 June 2022. In
response to the specific questions about Section 5 he recited the general provisions
of the Bill.
JUSTICE, in its Lords Second Reading Briefing, elaborated on
the concerns of the Joint Human Rights Committee and called for Section 5 to be
removed from the category of priority illegal content. That did not happen.
So far, so clear. Now the picture starts to get foggy, for a
variety of reasons.
Making an Illegal Content Judgement
First, is either version of the Section 5 offence capable
of applying online at all? Inclusion of the Section 5 offence in Schedule
7 is not conclusive that it can be committed online. The reason for inclusion
of offline offences is that, in principle, it is possible to encourage or
assist online an offence that can only be committed offline. Such inchoate
offences (plus conspiracy, aiding and abetting) are also designated as priority
offences. (Parenthetically, applying the inchoate offences to online posts
presents its own problems in practice – see here.)
One potential obstacle to applying the Section 5 offences
online is the requirement that the use or display be: “within the hearing or
sight of a person likely to be caused harassment, alarm or distress thereby”.
Does this require physical presence, or is online audibility or visibility
sufficient? If the latter, must the defendant and the victim (i.e. the person
likely to be caused harassment, alarm or distress) be online simultaneously?
The Law Commission considered the simultaneity point in its consultation on Modernising Communications Offences, concluding that the point was not clear.
Ofcom, in its draft Illegal Content Judgements Guidance,
does not address the question expressly. It appears to assume that the “within
hearing or sight” condition can be satisfied online. That may be right. But it
is perhaps unfortunate that the Act provides no mechanism for obtaining an
authoritative determination from the court on a point of law of this kind.
Second, which offence should be considered? CPS
practice is to charge the more serious racially aggravated offence if there is
credible evidence to prove it. Under the Online Safety Act, the opposite applies:
the simpler, less serious offence should be the one adjudged. The Ofcom
consultation documents explain why:
“In theory, in order to identify
a racially aggravated offence, the service would not only need to identify all
the elements of the Public Order Act offence, but also all the elements of
racial or religious aggravation. But in practice, in order to identify the
content as illegal content, the service would only need to show the elements of
the underlying Public Order Act priority offence, because that would be all
that was needed for the takedown duty to be triggered. The racial aggravation
would of course be likely to make the case more serious and urgent, but that
would be more a matter of prioritisation of content for review than of
identifying illegal content.” [26.81]
Third, how strong does the evidence of an offence have to
be?
In court, a criminal offence has to be proved beyond
reasonable doubt. The district judge in the Hussain case concluded that the
placard was: “part of the genre of political satire” and that as such, the
prosecution had “not proved to the criminal standard that it was abusive”. The
prosecution had also not proved to the criminal standard that the defendant was
aware that the placard may be abusive. The court reached those decisions after
a two-day trial, including evidence from two academic expert witnesses called
by the defence to opine on the meaning of ‘coconut’.
A service provider, however, must treat user content as
illegal if it has “reasonable grounds to infer” that it is illegal. That is a
lower threshold than the criminal standard.
Could that judgement be affected by the commencement of a
criminal prosecution? The Director of Public Prosecutions’ Charging Guidance says
that for a criminal prosecution to be brought the prosecutor: “must be
satisfied that there is sufficient evidence to provide a realistic prospect of
conviction…” It must be “more likely than not” that “an objective, impartial
and reasonable jury, bench of magistrates or a judge hearing a case alone,
properly directed and acting in accordance with the law, would convict the
defendant of the charge alleged.”
Whether “reasonable grounds to infer” is a lower threshold than
the “more likely than not to convict” Charging Guidance test for commencing a
prosecution is a question that may merit exploration. If (as
seems likely) it is lower, or even if it is just on a par, then a platform could perhaps
be influenced by the fact that a prosecution had been commenced, in the light
of the evidential threshold for that to occur. However, it does not follow from
commencement of a prosecution for a street display that the charging threshold
would necessarily be surmounted for an online post by a different person.
The more fundamental issue is that the lower the service
provider threshold, the more likely that legal content will be removed and the
more likely that the regime will be non-compliant with the ECHR. The JUSTICE House
of Lords briefing considered that ‘reasonable grounds to infer’ was a ‘low bar’,
and argued that provisions which encourage an overly risk-averse approach to
content removal, resulting in legitimate content being removed, may fall short
of the UK’s obligations under the ECHR.
The Ofcom consultation observes:
“What amounts to reasonable
grounds to infer in any given instance will necessarily depend on the nature
and context of the content being judged and, particularly, the offence(s) that
may be applicable.” [26.15]
The significance of context is discussed below. Notably, the
context relevant to criminal liability for a street display of a placard may be
different from that of an online post of an image of the placard by a third
party.
The service provider’s illegal content judgement must also be
made on the basis of “all relevant information that is reasonably available” to
it. Self-evidently, a service provider making a judgement about a user post would
not have the benefit of two days’ factual and expert evidence and accompanying
legal argument, such as was available to the court in the Hussain prosecution. The
question of what information should be regarded as reasonably available to a
service provider is a knotty one, implicating data protection law as well as
the terms of the OSA. Ofcom discusses this issue in its Illegal Harms
consultation, as does the Information Commissioner’s Office in its submission to the Ofcom consultation. The ICO also touches on it in its Content Moderation Guidance.
In order for the Section 10(3)(b) swift takedown obligation
to be triggered, the service provider must have become aware of the illegal
content. Ofcom’s consultation documents implicitly suggest that the awareness
threshold is the same as having reasonable grounds to infer illegality under
Section 192. That equation is not necessarily as clear-cut as might be assumed
(discussed here).
Fourth, whose awareness?
Ms Hussain’s placard was held not to be abusive. The court also held that she did not have the necessary awareness that the placard may be abusive. A service provider faced with an online post of an image of a placard would have to consider whether it had reasonable grounds for an inference that the placard was abusive and that the person who posted it (rather than the placard bearer) had the necessary awareness.
At least when it comes to reposting, Professor Lorna Woods, in her comments on the Ofcom Illegal Content Judgements Guidance, has argued that a requirement to evaluate the
elements of an offence for each person who posts content is too narrow an
interpretation of the OSA:
“The illegal content safety
duties are triggered by content linked to a criminal offence, not by a
requirement that a criminal offence has taken place. … The requirement for
reasonable grounds to infer a criminal offence each time content is posted …
presents an overly restrictive interpretation of relevant content. Such a
narrow perspective is not mandated by the language of section 59, which
necessitates the existence of a link at some stage, rather than in relation to
each individual user. … There is no obligation to look at the mental state of
each individual disseminator of the content”
Professor Woods gives as an example the reposting of
intimate images without consent.
S.59 (which defines illegal content) has expressly to be
read together with S.192 (illegal content judgements). S.192, at first sight, reads like an instruction manual for making a judgement in relation
to each individual posting. Be that as it may, if Professor Woods’ argument is
correct, then for many kinds of offence (even if not for the intimate images
offence) it seems likely to reintroduce the problems that the Independent Reviewer of Terrorism Legislation identified with S.59 (then Clause 52). The Bill was subsequently amended
to add S.192, presumably in response to his criticisms:
“2. ...Intention, and the absence of any defence, lie at the heart of terrorism offending. ...
16. The definition of “terrorism
content” in clause 52(5) is novel because under terrorism legislation content
itself can never “amount to” an offence. The commission of offences requires
conduct by a person or people.
17. Clause 52(3) attempts to
address this by requiring the reader of the Bill to consider content in
conjunction with certain specified conduct: use, possession, viewing,
accessing, publication or dissemination.
18. However, as Table 1 shows,
conduct is rarely sufficient on its own to “amount to” or “constitute” a
terrorism offence. It must ordinarily be accompanied by a mental element and/or
take place in the absence of a defence. …
23. … It cannot be the case that
where content is published etc. which might result in a terrorist offence being
committed, it should be assumed that the mental element is present, and that no
defence is available.
24. Otherwise, much lawful content online would “amount to” a terrorist offence.”
My own subsequent submission to the Public Bill Committee analysed Clause 52, citing the Independent Terrorism Reviewer's comments, and concluded in similar vein:
"Depending on its
interpretation the Bill appears either:
6.21.1 to exclude from
consideration essential ingredients of the relevant criminal offences, thereby
broadening the offences to the point of arbitrariness and/or disproportionate
interference with legitimate content; or
6.21.2 to require arbitrary
assumptions to be made about those essential ingredients, with similar
consequences for legitimate content; or
6.21.3 to require the existence
of those ingredients to be adjudged, in circumstances where extrinsic factual
material pertaining to those ingredients cannot be available to a filtering
system.
In each case the result is
arbitrariness (or impossibility), significant collateral damage to legal
content, or both.”
An interpretation of the OSA that increases the likelihood
of lawful content being filtered or taken down also increases concomitantly the
risk of ECHR incompatibility. (See also ‘Item by item judgements’ below.)
On a different point, Ofcom appears to suggest that the
wider and more general the audience for a controversial post, the greater the
likelihood of awareness being inferred:
“A service must also draw an
inference that the person posting the content concerned was at least aware that
their behaviour may be abusive. Such awareness may reasonably be inferred if
the abusive behaviour is very obviously likely to be distressing to most people
and is posted somewhere with wide reach.” [A3.77]
In contrast:
“It is less likely to be reasonably inferred if content is posted to a place where, for example, only persons sharing similar sorts of content themselves are likely to see it.” [A3.77]
Fifth, any defence?
As to the Section 5 defence of reasonable conduct, the district judge said that had it been necessary to go that far, she would have found Ms Hussain's conduct to be reasonable in that she was exercising her right to freedom of expression, and the judge would not have been satisfied that the prosecution was a proportionate interference with her right, or necessary in a democratic society.
Our hypothetical assumes that no court ruling
has been made. If the service provider has concluded that there are reasonable
grounds to infer abusive content and awareness, how should it evaluate the
possibility of a defence such as reasonable conduct?
When making an illegal content judgement a service provider
can only base a judgement on the availability of a defence if it positively has some
reason to infer that a defence to the offence may be successfully relied upon.
That is the effect of OSA S.192(6)(b):
“(6) Reasonable grounds for that inference exist in relation to content and an offence if … a provider—
(a) has reasonable grounds to infer that all elements necessary for the commission of the offence, including mental elements, are present or satisfied, and
(b) does not have reasonable grounds to infer that a defence to the offence may be successfully relied upon.”
An obvious instance of positive grounds to infer a Section 5
reasonable conduct defence on the part of the poster would be a comment added to the image.
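To make the two-stage structure of that test concrete, the statutory logic might be sketched as follows. This is purely an illustration of how S.192(6) sequences the questions, under the assumption (discussed above and in the Ofcom extracts below) that a defence only enters the analysis if there are positive grounds to consider it; the data model and function names are hypothetical and do not describe any real moderation system.

```python
# Illustrative sketch only: the two-stage logic of an OSA s.192(6) illegal
# content judgement as described in the text. The Assessment fields and the
# function are hypothetical constructs, not drawn from the Act or Ofcom guidance.

from dataclasses import dataclass

@dataclass
class Assessment:
    # Stage (a): reasonable grounds to infer that all elements of the priority
    # offence, including any mental element, are present or satisfied.
    grounds_all_elements_present: bool
    # Stage (b): positive grounds to infer that a defence (e.g. s.5
    # "reasonable conduct") may be successfully relied upon. Without such
    # positive grounds, a defence does not enter the calculation at all.
    positive_grounds_for_defence: bool

def treat_as_illegal(a: Assessment) -> bool:
    """On this sketch of s.192(6), return True if the content must be treated
    as illegal content (triggering, e.g., the s.10(3)(b) takedown duty)."""
    if not a.grounds_all_elements_present:
        return False  # no reasonable grounds to infer the offence at all
    if a.positive_grounds_for_defence:
        return False  # grounds to infer a defence may succeed
    return True

# Example: elements inferred, but the poster's accompanying comment supplies
# positive grounds to infer a reasonable conduct defence.
print(treat_as_illegal(Assessment(True, True)))   # False - not treated as illegal
print(treat_as_illegal(Assessment(True, False)))  # True - must be taken down
```

The only point of the sketch is that, under S.192(6), a defence affects the outcome only where the provider has some positive reason to infer it might succeed; otherwise the judgement turns entirely on the elements of the offence.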
In a different context (terrorism), Ofcom has reached the
same conclusion as to the need for positive grounds:
“There is a defence of
‘reasonable excuse’ which may be harder for services to make reasonable
inferences about, but they only need to consider it if there are positive
grounds to do so.” [26.93]
Similarly, for the offence of stirring up racial hatred:
“In cases where there are no
reasonable grounds to infer intent it is a defence for a person to show that he
was not aware that the content might be insulting or abusive. However, positive
grounds to infer this would need to be available to the service.” [A3.90]
As to the Section 5 “reasonable conduct” defence, a service provider hypothetically considering the original online post of the Marieha Hussain placard in the absence of a court judgment, and having concluded that there were reasonable grounds to infer that the placard was abusive and that the post satisfied the other elements of the offence, would have to consider whether the comment added by the poster (together with anything inferable from the nature of the posted image itself) provided reasonable grounds to infer that a defence of reasonable conduct might be successfully relied upon.
It might also be relevant to consider whether there were reasonable grounds to infer that the original placard holder could have had a reasonable conduct defence for the street display, as the judge in the Hussain case held that she would have done. However, the defence is specific to the conduct of each defendant; it is not a finding about the nature of the content.
As the judge's remarks demonstrate, consideration of the reasonable conduct defence can result in the service provider making judgements about the necessity and proportionality of the interference with freedom of expression.
Ofcom’s Illegal Content Judgements Guidance says:
“Services should take a
common-sense approach to considering whether the behaviour displayed in the
content could be considered reasonable. For example, it may be reasonable (even
if unwise) to abuse someone in response to abuse.” [A3.68]
Common sense also comes to the aid of the harassment and
distress element of the Section 5 offence:
“Services should consider any
information they hold about what any complainant has said about the emotional
impact of the content in question, and take a common-sense approach about
whether it is likely to cause harassment or distress.” [A3.27]
Appeals to common sense bring to mind the Oxford Reference
definition of palm tree justice:
“Ad hoc legal decision-making,
the judge metaphorically sitting under a tree to make
rulings based on common sense rather than legal principles or rules.”
The perceived value of guidance based on common sense may also
depend on whether one shares the William O. Douglas view that ‘Common sense
often makes good law’ or that of Albert Einstein: “Common sense is the
collection of prejudices acquired by age eighteen”.
In addition to reasonable conduct, Section 5 of the Public Order Act provides a defence “that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress”.
Ofcom suggests that a post that is legal may be rendered illegal through the poster being deprived of the defence as the result of a notification:
“it is a defence if it is reasonable to infer that the person had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment or distress. This is most likely to be relevant where a user is challenging a takedown decision (but of course if the person becomes aware as a result of the takedown decision that such a person was within hearing or sight, the content would become illegal content).” [A3.33]
That and Ofcom’s comment on the relationship between awareness and wide reach are both reminiscent of the concerns about the “harmful communications” offence that was originally included in the Bill, then dropped.
Sixth, what is the significance of context? The Hussain
decision appears to have turned on the court’s finding of what was ‘abusive’ in
the context of the display of the placard (albeit that the racially aggravated
element of the alleged offence inevitably focused attention on whether the
placard was specifically racially abusive).
The Ofcom Illegal Content Judgements Guidance on the Section 5
offence emphasises the significance of context:
“However, the context should be
taken into account carefully, since abusive content may also carry political or
religious meaning, and will be more likely to be a reasonable exercise of the
right to freedom of expression if it is.” [A3.79]
While some of the context available to a service provider
may be the same as that available to a court (for instance it is apparent on
the face of the image of the Hussain placard that it was a political comment),
much of the available context may be different: different person, different
place, different audience, additional comments, no expert witnesses. Add to
that a different standard of proof and a different statutory framework within
which to judge illegality, and the possibility of a different (most likely more
restrictive) conclusion on legality from that which a court would reach (even
if considering the same version of the offence) is significant.
The last word on context should perhaps go to Ofcom, in its
Illegal Content Judgements Guidance on Section 5:
“We have not given any usage
examples here, due to the particularly strong importance of context to these
judgements.” [A3.81]
Item by item judgements?
While some may argue that the OSA is about systems and processes, not content, there is no doubt (pace Professor Woods’
argument noted above) that at least some of its illegality duties require
platforms to make item by item content judgements (see discussion here). The
duties do not, from a supervision and enforcement point of view, require a
service provider to get every individual judgement right. They do require
service providers to make individual content judgements.
Ofcom evidently expects service providers to make item by
item judgements on particular content, while noting that the function of the
online safety regime is different from that of a court:
“The ‘beyond reasonable doubt’
threshold is a finding that only UK courts can reach. When the ‘beyond
reasonable doubt’ threshold is found in UK courts, the person(s) responsible
for the relevant illegal activity will face criminal conviction. However, when
services have established ‘reasonable ground to infer’ that content is illegal
according to the Act, this does not mean that the user will necessarily face
any criminal liability for the content and nor is it necessary that any user
has been prosecuted or convicted of a criminal offence in respect of such
content. When services make an illegal content judgement in relation to
particular content and have reasonable grounds to infer that the content is
illegal, the content must however be taken down.” [26.14]
Critics of the OSA illegality duty have always doubted the
feasibility or appropriateness of requiring platforms to make individual
content legality judgements, especially at scale. Those coming at it from a freedom of
expression perspective emphasise the likelihood of arbitrary judgements,
over-removal of legal content and consequent incompatibility with the European
Convention on Human Rights.
The ‘systems and processes’ school of thought generally advocates harm mitigation measures (ideally content-agnostic) in preference to item-by-item content
judgements. Relatedly, the Online Safety Network recently suggested in a
Bluesky post that “the government needs to amend the Act to make clear that -
once content has been found to be illegal content – it should continue to be categorised that way”. That would reduce the need for successive item-by-item
illegality judgements in relation to the same content, and would make explicit
what Professor Woods has argued is already the proper interpretation of the Act (see above).
The comments of the Online Safety Network were made in the
specific context of the non-consensual intimate image offence. For offences
where the gravamen lies in the specific nature of the prohibited content, and the role
of any mental element, other condition or defence is secondary (such as ensuring
only that accidental behaviour is not criminalised), there may be some force in
the suggestion that the same content should always be treated in the same way
(at least if the initial finding of illegality has been verified to a high
standard). Ofcom’s proposed CSAM image filtering duties, for instance, would operate
on that basis.
Elevated to a general principle, however, the suggestion becomes
problematic. For offences where the conduct element is broad or vague (such as
the Section 5 offence), or where context is significant, or where the heavy
lifting of keeping the offence within proper bounds is done by the mental
element or by defences, it would be overreaching (and at serious risk of ECHR
incompatibility) automatically to deem the same item of content to be illegal
regardless of context, intention or any other factors relevant to illegality.
In the terrorism field filtering algorithms have had trouble distinguishing
between illegal terrorist content and legal news reports of the same content. To
deem that content always to be illegal for the purpose of filtering and
takedown duties would be controversial, to say the least.
The Online Safety Network went on to comment that “the
purpose of the regime is not to punish the person sharing the content, but to
control the flow of that content.” It is true that the safety duties do not of
themselves result in criminal liability of the user. But “don’t worry, we’re
only going to suppress what you say” does not feel like the most persuasive argument for an interference with lawful freedom of expression.
[The original version of this post stated: "Since Ms Hussain’s placard was held not to be abusive, it appears that the magistrates’ court did not rule on any available defences." Now updated, with some consequential additions to the discussion of the reasonable conduct defence, in the light of Professor Augustine John's fuller account of the judge's ruling. (21 September 2024)]