Monday 28 June 2021

On the trail of the Person of Ordinary Sensibilities

One of the more perplexing provisions of the draft Online Safety Bill is its multi-level definition of legal but harmful content (lawful but awful content, to give it its colloquial name).

The proposal that service providers’ safety duties under the Bill should apply to such content is in itself controversial, when users themselves – who are in the same position as authors of books – owe no duty of care in respect of the safety of their readers. Some campaigners have argued that the proposed service provider duties should be limited to illegal content at most.

But given that legal content is included, how has the government set about drawing a line between innocuous and harmful?

The draft Bill contains twin definitions: ‘content harmful to adults’ and ‘content harmful to children’. Since they are almost identical, I shall refer just to harmful content. Both definitions make use of a legal fiction: the adult or child “of ordinary sensibilities”. 

Baroness Bull, in the House of Lords, foresaw “endless court time being devoted to determining whether my sensibilities are more ordinary than the next person's”.

Why does the draft Bill use this term? What does it mean?

Why the Person of Ordinary Sensibilities?

The first question is easier to answer than the second. The problem with trying to define harmful content is that speech is subjectively perceived and experienced. Different people respond to reading, hearing or viewing the same content in different ways. They differ as to whether they find content offensive, shocking or disturbing; they differ in their emotional response (enjoyment, distress, anger, fear, anxiety); they differ as to whether they change their views after reading, hearing or seeing it; and they differ in terms of any action that they may or may not choose to take after reading, hearing or seeing it.

Legislation based purely on subjectively perceived harm is thus liable to adopt, by default, the standard of the most easily shocked, upset or offended. Translated into service provider obligations, that would mean that when assessing the risk of harm on its service the provider might have to assume a low threshold and the most sensitive user.

To counter this, an available tool is to restrict the kinds of harm that are in scope, so that (for instance) mere annoyance does not count. The draft Bill stipulates ‘physical or psychological harm’. However, psychological harm still contains a significant element of subjectivity – it is not restricted to a medical condition – and in any event there remains the issue of people’s differing susceptibilities to psychological impact.

An approach to addressing variable susceptibility is to posit a notional reader defined in objective – or at least pseudo-objective – terms (discussed in detail in section 5 of my submission to the Online Harms White Paper consultation). The law contains many examples of such legally fictional characters, from the Man on the Clapham Omnibus to the Right-Thinking Member of Society. They are intended to iron out extremes – but in order to achieve that they still need to be clothed in attributes selected by the statute, the court or both. Goddard L.J. once observed:

“Of course, different minds have different ideas as to what is moderate, and seeking for a mean, a normal, or an average where there really is no guide is very like Lord Bowen’s illustration of a blind man looking for a black hat in a dark room”. (Mills v Stanway Coaches Ltd [1940] 2 K.B. 334)

Such legally fictional characters are normally deployed as part of a process of determining liability after the event, based on ascertained facts involving known individuals, tested and argued through the adversarial court process.

By contrast, the service provider under the draft Online Safety Bill would be expected to engage in a process of predictive policing, anticipating the kinds of content that, if they were to appear on the service, the service provider would have reasonable grounds to believe satisfied the definition of harm. It would have to consider the concomitant risk posed by them and (most probably) write an algorithm to address them.

The task that the draft Bill assigns to service providers is thus to predict, seek out, detect and then either deal with (for adult harmful content), or mitigate, manage or prevent (for various kinds of child harmful content), any number of different hypothetical black or grey hats that might or might not be present in the dark room. 

Level 1 - The Person of Ordinary Sensibilities

The legally fictional character chosen to bring some objectivity to the draft Online Safety Bill is the Person of Ordinary Sensibilities. So at Level 1 of the multi-level definition, S.46(3) defines content harmful to an adult as content the nature of which is such that “there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”. I will descend into the lower levels of the definition presently.

Antecedents

A version of the Person of Ordinary Sensibilities is found in existing law. The DCMS Minister for Digital and Culture Caroline Dinenage, in her letter of 16 June 2021 to the Lords Communications and Digital Committee, said: “This concept is already well-established in law, for example in case law concerning the tort of misuse of private information.” This refers to the judgment of the House of Lords in the Naomi Campbell case. However, there are significant differences between misuse of private information and infliction of psychological harm. Moreover, when we delve into the antecedents of the Person of Ordinary Sensibilities we find that a mutation has occurred.

The main difference from privacy is that the focus of infliction of psychological harm is on the reader of the material: the person on whom the harm is inflicted. In contrast, in the tort of misuse of private information the hypothetical Reasonable Person of Ordinary Sensibilities refers to the person whose privacy is said to have been invaded, not someone who reads the disclosed information.

The privacy test is therefore not about impact on someone who receives information. It is whether the Reasonable Person of Ordinary Sensibilities, put in the position of the person whose private information is said to have been misused, would find the disclosure offensive or objectionable. What view would that hypothetical person, put in the position of the claimant and exercising their rational faculties, take of such disclosure about their own private life?

Caution is therefore necessary in transposing the Reasonable Person of Ordinary Sensibilities from misuse of private information to psychological impact on the reader.

Must the Person of Ordinary Sensibilities be Reasonable?

Most intriguingly, somewhere on the journey from Campbell v MGN to the draft Online Safety Bill, ‘Reasonable’ has been jettisoned. 

This can be no accident since ‘reasonable’ is an integral part of the Campbell formulation, and can be traced back in turn to a 1960 US paper on Privacy by Dean William Prosser. Why would anyone take a conscious decision to strike out ‘Reasonable’? Why include the Unreasonable Person of Ordinary Sensibilities? I have given some thought to this and - on the assumption that Reasonable has indeed been omitted for a reason - I have a possible answer. Whether it is the actual explanation I do not know.

When considering whether reasonableness is relevant, recall that for inflicted harm - unlike for privacy - we are considering the impact of the information on its recipient. If you jab someone in the arm with a needle, any person of ordinary sensibilities will react autonomically in the same way (if not necessarily to the same degree): with pain and blood. There is no room for any additional concept of reasonableness, since the reaction of the person to whom it is done is not a matter of conscious decision. 

Omitting “reasonable” in the draft Bill’s formulation suggests either that the drafters of the Bill have assumed the same to be true of imparting information; or if not, that as far as the draft Bill is concerned the reasonableness of the reader’s conscious reaction is irrelevant.

We can conceive of a circumstance in which reaction to information is not a matter of conscious decision. If someone suffering from epilepsy were to encounter online content containing flashing lights, a physical reaction might be triggered. It would appear likely to fit the description of 'significant adverse physical impact'. That reaction is not in any sense a matter of voluntary choice, but a question of someone’s sensitivity to flashing lights. As with the needle in the arm, reasonableness of the reaction is simply an irrelevant concept of no application. The only relevant question is whether the sensibilities of an epilepsy sufferer should be considered to be ordinary. (More of that when we consider the Level 2 definition.)

That, it seems, is how the draft Online Safety Bill approaches the matter of reading online content, not just for physical harm but also for psychological harm. It would be consistent with the phraseology “content having a significant… impact”.

One possible interpretation of the draft Bill is that only information causing an autonomic adverse psychological impact is in scope. Any kind of impact that engages the rational faculties of the reader, and to which the reasonableness of the reader’s chosen reaction is therefore a conceptual possibility, would be out of scope.

That seems very unlikely to be the government’s intention, first because the distinction (if there is one) verges on deep psychological and even philosophical questions about what is and is not a conscious reaction.  Does a Person of Ordinary Sensibilities respond automatically or make a choice in how they react emotionally to encountering, say, prejudice of various kinds? What if the question of whether the particular speech in question amounts to prejudice in the first place is contested and debated, each side regarding the other as prejudiced?

Second, such a narrow interpretation would appear to exclude from scope informational subject matter (such as misinformation) that the government plainly intends to include and is referred to elsewhere in the draft Bill.

The second (and I would say probable) interpretation is that the formulation includes situations in which the reader has a degree of conscious choice about how to react, but nevertheless the reasonableness of the reaction is to be treated as irrelevant.

There is a certain logic to that when we consider misinformation. Any potentially harmful impact of misinformation or disinformation necessarily depends on the reader believing what they are told. Deciding what to believe involves an exercise of the critical faculties. Capturing all misinformation within the definition of harmful content depends upon excluding reasonableness from the equation and including the Credulous Person of Ordinary Sensibilities within the notional reader.

To take an extreme example of the distinction between sensibilities and reasonableness, consider a post predicting that the world will end tomorrow. Would a person of ordinary sensibilities experience significant adverse psychological impact if they were to believe it? It is hard to think otherwise. At any rate there would surely be reasonable grounds for a service provider to believe that that was a material risk. Would a reasonable and well-informed person believe it? No. If reasonableness of the belief is ruled out of consideration, the Credulous Person of Ordinary Sensibilities is within scope, the end-of-the-world post falls within the definition of harmful content and is within the service provider’s safety duty.

Conversely, if reasonableness is a relevant attribute of the Person of Ordinary Sensibilities, then the more outlandish the misinformation, the less likely it would be to fall within scope. The service provider – in addition to all the other fiendish judgements that it is required to make – would have to distinguish between what misinformation it is reasonable and unreasonable to believe.

This is not some esoteric academic point. In the USA claims for negligent infliction of emotional distress are permitted in some states. The New Jersey Supreme Court in Williamson v Waldman limited recovery to “the fears experienced by a reasonable and well-informed person.” This was a case based on fear of contracting AIDS as a result of having been pricked by a discarded medical lancet while cleaning a trash can. The court observed:

“Therefore, as a matter of sound public policy, the standard of proximate cause should require as an element of the test of causation a level of knowledge of the causes, transmission and risks of AIDS. Such an enhanced standard will serve to overcome and discourage ignorance about the disease and its resultant social ills. Thus, the reasonableness standard should be enhanced by the imputation to the victim of emotional distress based on the fear of contracting AIDS of that level of knowledge of the disease that is then-current, accurate, and generally available to the public.”

What is a significant adverse psychological impact?

The range of possible emotional reactions to a given item of content may give rise to difficult questions.

Does our notional Person of Ordinary Sensibilities become angry, anxious, fearful or distressed when they read certain content? Is anger an adverse psychological impact? Or do only the other reactions, if they are significant, qualify as adverse? Does the service provider have to gauge, hypothetically, whether our fictional legal character would be angered or distressed by reading particular kinds of content?

Is the fact that (say) serious distress is one possible reaction of our notional Person of Ordinary Sensibilities enough to satisfy the definition and trigger the service provider’s safety duties? Does the service provider have to consider whether, the more highly charged the subject matter of a debate, the more likely it is that someone will claim to be traumatised by the repugnant views of their opponent?

Physical and psychological harm are not supposed to be about taking offence or objection. On the other hand, the government has said that psychological harm is not intended to be limited to medically recognised conditions. The examples that the Explanatory Notes give of kinds of significant negative effect on the mental state of an individual are:

feelings such as serious anxiety and fear; longer-term conditions such as depression and stress; and medically recognised mental illnesses, both short-term and permanent.

What is ‘significant’ may be a matter for debate. Does it mean serious (as the Explanatory Notes suggest), or merely that it is not trivial? It is noteworthy that some US caselaw has sought to inject a standard of reasonableness into the seriousness of the emotional distress experienced: “a level of distress such that no reasonable person could be expected to endure it without undergoing unreasonable suffering”. (Williams v Tennessee National Corp.)

Level 2 - characteristics and membership of groups

Having started by saying in S.46(3) that the Person of Ordinary Sensibilities has only ordinary sensibilities, the draft Bill goes on to qualify that.  Section 46(4) provides that:

“… in the case of content which may reasonably be assumed to particularly affect people with a certain characteristic (or combination of characteristics), or to particularly affect a certain group of people, the provider is to assume that [the Person of Ordinary Sensibilities] possesses that characteristic (or combination of characteristics), or is a member of that group (as the case may be).”

To take our previous example of a sufferer from epilepsy, if their sensibilities are not Ordinary under S.46(3), they would appear to be so under S.46(4). Epilepsy seems apt to count at least as a characteristic, in which case the service provider should consider whether there is a material risk of user content with flashing lights affecting sufferers from epilepsy.

Curiously, the DCMS Minister’s letter to the Lords Committee said that “use of the term ‘ordinary sensibilities’ is intended to make clear that the test of whether legal content is harmful does not include content that only people with an unusual sensitivity (such as a phobia) would be harmed by.” Perhaps the Minister was intending to refer only to S.46(3). If she was also including S.46(4), it is not clear to me why (say) epilepsy would not be within scope of that section.

The issues under S.46(4) become more complex when previous experience is brought into the equation.

The Lords Communications and Digital Committee asked the DCMS whether being a survivor of sexual abuse would count as a relevant characteristic. The Minister’s first comment was that the government would expect Ofcom’s codes of practice and any supplementary guidance to assist service providers in fulfilling their obligations in relation to any such points – which is not really to the point, since the question was about the meaning of the legislation (by which Ofcom would be bound).

However, the Minister went on to suggest that experiences that can have a profound effect on victims should be taken into account by service providers when assessing the risk of harm posed by online content to individuals. The person of ordinary sensibilities would include someone who had had that experience. The same would apply in other cases where content could potentially give rise to a material risk of significant adverse physical or psychological impact on survivors of an experience.

One effect of this provision appears to be that if different survivors of an experience might react differently to certain content – some, perhaps, finding discussion of a difficult subject helpful and some suffering anxiety or worse – the service provider should assume the adverse reaction.

Level 3 – indirect impact on a Person of Ordinary Sensibilities

Section 46(7) defines indirect impact on a Person of Ordinary Sensibilities. S.46(7)(a) addresses the risk of content causing an individual to do or say things to a targeted adult that would have an adverse physical or psychological impact on such an adult.

In this context it is clear that the individual concerned is making a conscious choice about how to respond to content. However, the section speaks in terms of "content causing an individual to do or say things" to another adult.

The unstated premise appears to be that an individual makes no conscious decision – that reading content causes the individual to act in a certain way. However, we read and view and make decisions. We may do something or nothing. If we do something, we choose what to do.  Content does not cause a single, involuntary, Pavlovian response.

The DCMS Minister, in her letter to the Lords Communications Committee, suggested that in this instance reasonableness of the interposed individual’s response is in fact a limiting factor:

“The service provider would not have the necessary reasonable grounds to believe that there was such a risk if the content could only have such an effect by triggering an unexpected response in an unreasonable person (for example innocuous content leading to risky or violent behaviour).” (emphasis added)

There is a tension between referring to a response as being 'triggered' and simultaneously considering the reasonableness of that response.

A provision of this kind might include a reference to whether it was reasonably foreseeable that an individual would decide to take a certain kind of action as a result of reading certain kinds of content, and whether that action was reasonable. S.46(7) is silent on that. The government’s view could perhaps be that the reasonableness limitation is implicit in causation.

Level 4 – the Ultimate Demise of the Person of Ordinary Sensibilities

Section 46(6) contains a further refinement of the Person of Ordinary Sensibilities, dealing with the situation where there is a known person at whom content is directed, or who is the subject of it. At this point the Person of Ordinary Sensibilities is abandoned and replaced with the person’s own sensibilities.

Thus where the provider has knowledge, relevant to the content, about a particular person at whom content is directed, the risk of significant physical or psychological impact on that person is to be considered, taking into account any of the following known to or inferred by the provider—

(a) that person’s characteristics;

(b) that person’s membership of a certain group of people.

The effect of this section appears to be that someone who claims to be significantly and adversely psychologically impacted by particular content can put the service provider on notice. If the service provider has reasonable grounds to believe that a material risk of such impact exists, then its safety duty focuses on that person and that content. We can imagine that a service provider would be reluctant to deny the risk once put on notice of the claim. As such, this provision appears to open up the possibility of an individual veto over content.

Conclusion

The draft Bill's attempt to convert subjective perception of content into an objective standard illustrates just how difficult it is to apply concepts of injury and harm to speech. The cascading levels of definition, ending up with a provision that appears to give precedence to an individual’s subjective claim to significant adverse psychological impact, will bear close scrutiny – not only in their own right, but as to how a service provider is meant to go about complying with them.

[30 June 2021. Inserted 'is to be treated as', for clarity. Deleted erroneous 'not'.]


