The recent decision of the Australian Administrative Review Tribunal in X Corp and Elston v eSafety Commissioner illustrates the complexities that can arise when the law tasks a regulator or platform with adjudging an online post.
The decision grapples with a dilemma that is familiar,
albeit under a very different legislative regime, from the UK’s Online Safety
Act 2023. The dilemma also features in the police takedown notice scheme for unlawful knives and other weapons content contained in the Crime and Policing Bill (currently
making its way through Parliament).
At a high level, the issue is how to achieve rapid removal
of impugned user content (typically because it is illegal under the general law
or defined as harmful in some way), while not suppressing legitimate posts. The specific
challenge is that the contents of the post alone are often insufficient to determine
whether the legal line has been crossed. Contextual information, which may be
off-platform and involve investigation, is required. The Elston case provides a vivid illustration.
The twin imperatives of rapid removal and adequate investigation
of context stand in conflict with each other. A regime that requires contravention
to be adjudged solely on the contents of a post, ignoring external context, is
likely to be either ineffectual or overreaching, depending on which way the
adjudicator is required to jump in the absence of relevant information.
Australia’s Online Safety Act 2021 empowers the eSafety Commissioner,
but only following receipt of a complaint, to issue a content removal notice to
a social media platform if she is satisfied that a user’s post constitutes
cyber-abuse material targeted at an Australian adult. (In this respect the
Australian legislation resembles the UK Crime and Policing Bill more than our Online
Safety Act: Ofcom has no power under the OSA to require removal of a specific
item of user content. The Crime and Policing Bill will institute a regime of police
takedown notices for unlawful knives and other weapons content, albeit not predicated on receipt
of a complaint.)
Cyber-abuse material under the Australian Act has two key elements.
The eSafety Commissioner has to be satisfied of both before issuing a removal
notice:
Intention Element: an
ordinary reasonable person would conclude that it is likely that the material
was intended to have an effect of causing serious harm to a particular
Australian adult.
Offence Element: an
ordinary reasonable person in the position of the Australian adult would regard
the material as being, in all the circumstances, menacing, harassing or
offensive.
Serious harm is defined as serious physical harm or serious
harm to a person’s mental health, whether temporary or permanent. Serious harm
to a person’s mental health includes:
(a) serious psychological harm; and
(b) serious distress;
but does not include mere ordinary emotional reactions such
as those of only distress, grief, fear or anger.
The need to assess what an ‘ordinary reasonable person’
would think is common to both elements. For the Intention Element the Ordinary Reasonable
Person has to determine the likely intention of the person who posted the
material. For the Offence Element, in order to determine how the material
should be regarded, the Ordinary Reasonable Person has to be put in the
position of the Australian adult putatively intended to be targeted.
The reason why the legislation hypothesises an Ordinary Reasonable
Person is to inject some objectivity into what could otherwise be an overly
subjective test.
The Tribunal observed that the Intention Element converted
what would otherwise be “a broadly available censorship tool based on emotional
responses to posted material” into a provision that “protects people from a
much narrower form of conduct where causing serious harm to a particular person
was, in the relevant sense, intended” [21]. (This has similarities to the heavy lifting done by the mental element in broadly drafted terrorism offences.)
We are in familiar legal territory with fictive characters
such as the Ordinary Reasonable Person. It is reminiscent of the fleeting
appearance of the Person of Ordinary Sensibilities in the draft UK Online
Safety Bill.
Nevertheless, as the Tribunal decision illustrates, the
attributes of the hypothetical person may need further elucidation. Those characteristics
can materially affect the balance between freedom of expression and the
protective elements of the legislation in question.
Thus, what is the Ordinary Reasonable Person taken generally
to know? What information can the Ordinary
Reasonable Person look at in deciding whether intention to cause serious harm
is likely? How likely is likely?
The information available to the Ordinary Reasonable Person
The question of what information can, or should, be taken
into account is especially pertinent to legislation that requires moderation
decisions to be made that will impinge on freedom of expression. The Tribunal
posed the question thus:
“… whether findings on the
Intention Element should be made on an impressionistic basis after considering
a limited range of material, or whether findings should be made after careful
consideration, having regard to any evidence obtained as part of any investigation
or review process.” [45]
It found that:
“The history and structure of the
provisions suggest that while impressionistic decision-making may be authorised
in the first instance, early decisions made on limited information can and
should be re-visited both internally and externally as more information becomes
available, including as a result of input from the affected end-user.” [45]
That was against the background that:
“…the legislation as passed
allows for rapid decision making by the Commissioner to deal with material that
appears, on its face, to be within a category that the Act specified could be
the subject of a removal notice. However, once action has been taken, the
insertion of s 220A confirms that Parliament accepted that there needed to be
an opportunity for those affected by the action to have an opportunity to
address whether the material was actually within the prohibited statutory
category. External review by the Tribunal was provided for with the same end in
mind.” [44]
The UK Online Safety Act states that a platform making an illegality judgement should do so on the basis of all relevant information reasonably available to it. Ofcom guidance fleshes out what information is to
be regarded as reasonably available.
The UK Crime and Policing Bill says nothing about what
information a police officer giving an unlawful weapons content removal notice, or a
senior officer reviewing such a notice, should seek out and take into account. Nor
does it provide any opportunity for the user whose content is condemned to make
representations, or to be notified of the decision.
Generally speaking, the less information that can or should
be taken into account, the greater the likelihood of arbitrary decision-making
and consequent violation of freedom of expression rights.
In the Elston case three different variations on the
Ordinary Reasonable Person were put to the Tribunal. The eSafety Commissioner
argued that the Ordinary Reasonable Person should be limited to considering the
poster’s profile on X and the material constituting the post. The poster’s
subsequent evidence about his intention and motivations was irrelevant to
determining whether the Intention Element was satisfied. The same was said to
apply to evidence about the poster’s knowledge of the Australian person said to
be targeted. (The Tribunal observed that this would mean that even material contained in the complaint that preceded the removal notice would be excluded from consideration.)
As to the general knowledge of the Ordinary Reasonable
Person, the eSafety Commissioner argued that (for the purposes of the case
before the Tribunal, which concerned a post linking to and commenting on a
newspaper article about a transgender person) the Ordinary Reasonable Person
would be aware that material on X can bully individuals; would understand that
public discourse around sexuality and gender can be polarising as well as
emotionally charged; and would understand that calling a transgender man a woman
would be to act contrary to that transgender man’s wishes.
X Corp argued that the decision-maker was entitled to have regard to evidence (including later evidence) concerning the immediate context as at the time of the post, but no more. The facts which could be known to the ordinary reasonable person when making their assessment included facts about the subject of the post or the poster, and what their relationship was at the time of the post, but not evidence about what happened afterwards.
The significance of the different positions was that on X
Corp’s case, later evidence could be taken into account to the effect that the poster did not
know, or know of, the person who was the subject of the post until he read the
newspaper article. That was not apparent from the post itself or the poster’s
profile.
Mr Elston (the poster) argued that a wide range of material
could be acquired and treated as available to the ordinary reasonable person
when asked to decide whether the material posted ‘was intended to have an
effect of causing serious harm’.
On this view of the statutory power, evidence obtained
before or after the post, during the course of the investigation and concerning
matters that occurred after the post was made, could be treated as available to
the Ordinary Reasonable Person when considering the Intention Element.
On this approach, Mr Elston’s own evidence about his
intention would be “relevant to consider, but not necessarily conclusive of
what an ordinary reasonable person would conclude about his intention.” [62]
The Tribunal agreed with Mr Elston’s approach:
“The existence of the
investigative powers available to the Commissioner and the complaint-based
nature of the power provide a powerful basis for concluding that the
Commissioner and the Tribunal should be feeding all of the available evidence
into the assessment of what the ‘ordinary reasonable person’ would conclude was
likely before determining whether the Intention Element is satisfied.” [74]
It added:
“The Parliament was concerned to
give end-users an opportunity to address claims about their conduct both on
internal review and by providing review in the Tribunal. To read the ordinary
reasonable person lens as a basis for disregarding evidence submitted by either
the complainant or the end-user or discovered by the Commissioner during an
investigation is not consistent with the fair, high quality decision-making the
Parliament made provision for.” [77]
The Tribunal then spelled out the consequences of the Commissioner’s
approach:
“…In many circumstances, including
this case, limiting the information that can be considered by the ‘ordinary reasonable
person’ to the post and closely related material, results in critical
information not being available.” [81]
It went on:
“In this case, there is no
evidence in any of the material posted and associated with the post, that the
post was ever brought to the attention of Mr Cook [the complainant]. …
That Mr Cook was aware of the
post is only discoverable by reference to the complaint submitted to the
Commissioner. If a decision maker is restricted to knowing that a post was made
to a limited audience, none of whom included Mr Cook, reaching the conclusion
that the material was intended to cause serious harm to Mr Cook is going to be
difficult. In those circumstances, where there appears to be no evidence to
which the decision maker can have regard in order to make a finding that the
post came to Mr Cook’s attention, let alone was intended to come to his
attention, a decision to issue a removal notice could not be sustained.” [81]
The Tribunal reiterated:
“In many cases, it will be the
complaint that provides critical context to allow an ordinary reasonable person
to conclude that serious harm was intended.” [81]
The Tribunal concluded that evidence about what happened
after the post was posted could be relevant if it shed light on the likely
intention of the poster. Similarly, evidence about prior behaviour of third
parties in response to certain posts could be relevant, even if it was only
discoverable by the regulator using compulsory powers:
“So long as evidence sheds light
on the statutory question, then it can and should be considered. It would be
inappropriate in advance of a particular factual scenario being presented to
the decision-maker to say that there are whole categories of evidence that
cannot be considered because the statutory test in all circumstances renders
the material irrelevant.” [87]
Nevertheless, that did not mean that the concept of the ‘ordinary reasonable person’ had no effect:
“It moves the assessment away
from a specific factual inquiry concerning the actual thought process of the
poster and what effect they intended to achieve by the post. I must undertake a
more abstract inquiry about what an independent person (who isn’t me) would
think was the poster’s intention having regard to the available evidence.
Provided evidence is relevant to that question, then it can and should be
considered.” [89]
Whilst specific to the Australian statute and its fictive
Ordinary Reasonable Person, this discussion neatly illustrates the point that
has repeatedly been made (and often ignored): that platform judgements as to
illegality required by the UK Online Safety Act will very often require
off-platform contextual information and cannot sensibly be made on the basis of
a bare user post and profile.
The point assumes greater significance with real-time
proactive automated content moderation – something that Ofcom is proposing to extend – which by its very nature is unlikely to have access to off-platform
contextual information.
The discussion also speaks eloquently to the silence of the
Crime and Policing Bill on what kind and depth of investigation a police
officer should conduct in order to be satisfied as to the presence of unlawful
weapons content.
Likelihood of serious harm
The other significant point that the Tribunal had to
consider was what the statute meant by ‘likely’ that serious harm was intended.
The rival contentions were ‘real chance’ and ‘more probable than not’. The
Tribunal held that, in the statutory context, the latter was right. The conclusion
is notable for acknowledging the adverse consequences for freedom of expression of
adopting a lower standard:
“A finding by the ordinary
reasonable person that a person was setting out to cause serious harm to
another is a serious, adverse finding with implications for freedom of
expression. It is not the kind of finding that should be made when it is only
possible that serious harm was intended.” [119]
The standard set by the UK Online Safety Act for making
content illegality judgements is “reasonable grounds to infer”. It remains
questionable, to say the least, whether that standard is compatible with ECHR
Article 10. The Crime and Policing Bill says no more than that the police
officer must be ‘satisfied’ that the material is unlawful weapons content.
The Tribunal’s conclusion
On the facts of the case, the Tribunal concluded that an
ordinary reasonable person in the position of the complainant Mr Cook would
regard the post as offensive; but that the Intention Element was not satisfied.
That depended crucially on the broader contextual evidence:
“Read in isolation, the post
looks to be an attempt to wound Mr Cook and upset him and cause him distress,
perhaps even serious distress. If an ordinary reasonable person was only aware
of the post, then it may be open to find that the poster’s intention was likely
to be to cause serious harm to Mr Cook. However, when the broader context is
known and understood, it is difficult to read the post as intended to harm Mr
Cook, or intended to have others direct criticism towards Mr Cook or designed
to facilitate vitriol by spreading personal information about him.” [191]
Amongst the broader context was a lack of evidence that the poster intended the post to come to Mr Cook’s attention.
“For the post to do any harm it
needed to be read by Mr Cook. While I am satisfied that Mr Elston was
indifferent to whether the post did come to Mr Cook’s attention and indifferent
to whether or not it distressed him, there is no evidence to support the
conclusion that the post was made with the intention of it being brought to Mr
Cook’s attention.” [197]
Part of the reasoning behind that conclusion was that Mr
Elston’s post did not tag Mr Cook’s user handle, but only that of the World
Health Organisation (which had appointed Mr Cook to an advisory panel):
“It is notable that Mr Elston
only included the handle for the WHO in his post and there is nothing in the
body of the post that attempts to facilitate the contacting of Mr Cook by Mr Elston’s
followers. Mr Cook’s name is not used in the body of the post.” [200]
Overall, the Tribunal concluded:
“When the evidence is considered
as a whole I am not satisfied that an ordinary reasonable person would conclude
that by making the post Mr Elston intended to cause Mr Cook serious harm. In
the absence of any evidence that Mr Elston intended that Mr Cook would receive
and read the post, and in light of the broader explanation as to why Mr Elston
made the post, I am satisfied that an ordinary reasonable person would not
conclude that it is likely that the post was intended to have an effect of
causing serious harm to Mr Cook.” [207]
For present purposes the actual result in the Elston case matters less than the illustration that it provides of what can be involved in making judgements about removal or blocking of posts against a statutory test: whether that evaluation be done by a regulator, a platform discharging a duty imposed by statute or (in the likely future case of unlawful weapons content) the police.