Monday, 22 November 2021

Licence to chill

To begin with, a confession. I should probably have paid more attention to the Law Commission’s project on reforming communications offences.  The Commission published its Final Report in July 2021, recommending new offences to replace S.127 Communications Act 2003 and the Malicious Communications Act 1988.  

Now that the government has indicated that it is minded to accept the Law Commission’s recommendations, a closer – even if 11th-hour – look is called for: doubly so, since under the proposed Online Safety Bill a service provider would be obliged to take steps to remove user content if it has “reasonable grounds to believe” that the content is illegal. The two provisions would thus work hand in glove. [The Bill as introduced to Parliament omitted the "reasonable grounds to believe" threshold. It was silent as to what standard a service provider should apply to adjudge illegality. "Reasonable grounds to infer" is now being introduced by a government amendment at Report Stage.]

There is no doubt that S.127, at any rate, is in need of reform. The question is whether the proposed replacement is an improvement. Unfortunately, that closer look suggests that the Law Commission’s recommended harm-based offence has significant problems. These arise in particular for a public post to a general audience. 

The proposed new offence

The elements of the Law Commission’s proposed new offence are:

(1) the defendant sent or posted a communication that was likely to cause harm to a likely audience;

(2) in sending or posting the communication, the defendant intended to cause harm to a likely audience; and

(3) the defendant sent or posted the communication without reasonable excuse.

(4) For the purposes of this offence:

(a) a communication is a letter, article, or electronic communication;

(b) a likely audience is someone who, at the point at which the communication was sent or posted by the defendant, was likely to see, hear, or otherwise encounter it; and

(c) harm is psychological harm, amounting to at least serious distress.

(5) When deciding whether the communication was likely to cause harm to a likely audience, the court must have regard to the context in which the communication was sent or posted, including the characteristics of a likely audience.

(6) When deciding whether the defendant had a reasonable excuse for sending or posting the communication, the court must have regard to whether the communication was, or was meant as, a contribution to a matter of public interest.

The Law Commission goes on to recommend that “likely” be defined as “a real or substantial risk”. This requires no further explanation for “likely to cause harm”. For “a likely audience”, it would mean a real or substantial risk of seeing, hearing, or otherwise encountering the communication. (Report [2.119])
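Purely by way of illustration – and emphatically not anything that appears in the Report – the way these elements interlock can be sketched in code. Every name and number below is an invented placeholder; in particular, the numeric stand-in for “a real or substantial risk” is not meant to suggest that the test is quantitative:

    # Illustrative sketch only: a rough formalisation of how the proposed
    # elements might combine. All names and thresholds are assumptions.
    from dataclasses import dataclass

    SUBSTANTIAL = 0.25  # placeholder for "a real or substantial risk"

    @dataclass
    class LikelyViewer:
        risk_of_encountering: float      # assessed at the point of sending
        risk_of_serious_distress: float  # given their characteristics and context

    def conduct_element(audience: list[LikelyViewer]) -> bool:
        """Was the communication likely to cause harm to a likely audience?
        On the Report's apparent approach, one person likely both to encounter
        the communication and to suffer serious distress suffices."""
        return any(v.risk_of_encountering >= SUBSTANTIAL
                   and v.risk_of_serious_distress >= SUBSTANTIAL
                   for v in audience)

    def offence_made_out(audience: list[LikelyViewer],
                         intended_harm: bool, reasonable_excuse: bool) -> bool:
        # All three elements must be present: conduct, intent, no reasonable excuse.
        return conduct_element(audience) and intended_harm and not reasonable_excuse

    # One unusually sensitive but foreseeable reader satisfies the conduct
    # element, leaving everything to turn on intent and reasonable excuse.
    print(offence_made_out([LikelyViewer(0.9, 0.6)],
                           intended_harm=False, reasonable_excuse=True))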

Psychological harm

The challenge for any communications offence based on harm to a reader is how to reconcile the need for an objective rule governing speech with the subjectivity of how speech is perceived. The cost of getting it wrong is that we end up with a variation on the heckler’s veto: speech chilled by fear of criminal liability arising from the bare assertion of a claim to have suffered harm.

The focus on likelihood of ‘psychological harm’ as the criterion for the recommended offence has provoked criticism on grounds of subjectivity. It is notorious that protagonists in controversial areas of debate may claim to be traumatised by views with which they are in deep disagreement. The very kinds of speech that are meant to have the greatest freedom of expression protections – political and religious – are perhaps those in relation to which that kind of claim is most likely to be made.

The Law Commission would argue that the recommended offence revolves around whether relevant harm is likely to be caused to someone likely to encounter the communication in question (the ‘conduct element’ of the offence). Harm has to be both likely and serious. A bare claim to have suffered harm would therefore not of itself demonstrate that harm was likely or serious, since a complainant might be unforeseeably sensitive. Additionally, the prosecution would have to show that the communication was made without reasonable excuse and that the defendant intended to harm someone likely to encounter the communication.

Kinds of audience

The Law Commission stresses that the offence would focus on the factual context within which the communication took place. Thus, the likelihood of a private communication sent to one person causing harm would be adjudged most obviously according to the characteristics of the intended recipient. If there was a real or substantial risk that the intended recipient would suffer harm, then (whether or not the intended recipient actually suffered harm) the conduct element would be made out. Further, if it was likely (at the point of sending the communication) that someone other than the intended recipient would also see the communication, then it would be relevant to consider whether that other person would be likely to suffer harm from doing so, taking into account their characteristics.

A similar analysis would apply to a group of readers. A post to a forum dedicated to disability issues would be likely to be read by people with disabilities. That characteristic would be taken into account, with the result that a likely audience would be likely to be caused serious distress by a hate post about disabled people. The Law Commission Consultation Paper applies that logic to the example of a tweet directed to a well-known disability charity by means of the ‘@’ function. The likely audience would primarily be the charity and its followers, many of whom could be assumed to have a disability.

How, though, should this analysis be applied to a public post to a general audience? What would be the relevant characteristics of a likely audience? How are those to be determined when no particular kind of individual is especially likely to encounter the post?

Does the general nature of the audience mean that the risk of satisfying the conduct element is reduced, because no particular relevant characteristics of an audience can be identified? Or is the risk increased, as the larger the audience the more likely it is to contain at least one person with characteristics such that they are likely to suffer harm? Since the draft offence refers to ‘someone’, one likely person appears to be sufficient to amount to a likely audience. The Consultation Paper at [5.124], discussing ‘likely audience’ in the context of the then proposed mental element of the offence, adopts that position.

The Law Commission Report does not fully address the question of the characteristics of a general audience. It responded to submissions raising concerns on the question of public posts by rejecting suggestions that a “reasonable person” standard should be applied, on the basis that sufficient protection was provided by the requirement of intent to harm and the need to prove lack of reasonable excuse.

Actual or hypothetical audience?

The uncertainty about the position of public posts to a general audience is exacerbated by lack of clarity over whether the conduct element of the offence requires proof that someone likely to encounter the communication actually did so (in which case the court’s analysis would presumably tend to be focused on the characteristics of the person shown to have encountered it, and the likelihood of their being harmed as a result); or whether it would be sufficient to rely on the mere likelihood of someone encountering it (in which case the court would appear to have to decide what characteristics to attribute to a hypothetical likely member of the audience).

If the latter, then at least for a public post to a general audience the relevant factual context – a feature of the proposed offence on which the Law Commission places considerable reliance – would seem, as regards the characteristics of the hypothetical person likely to suffer harm, to have to be constructed in the minds of the judge or jury.

The Law Commission states that the proposed offence is complete, both for likely harm and likely audience, at the point of sending the communication (Report [2.56], [2.91], [2.117]). On that logic it should not matter if no-one can be shown actually to have been harmed or actually to have encountered the communication. Proof of likelihood should suffice for both.

The Law Commission also says (Report [2.256]) that:

“where a communication was sent or posted from a device to a social media platform, but was not made visible by that platform (perhaps because of preventative algorithms), it could be impossible for the offence to be made out because the prosecution would have to prove that there was a likely audience who was at a real and substantial risk of seeing the message. It might be that no one was at a real or substantial risk of seeing the communication (i.e. the likely audience was nobody).”

If the offence is complete at the point of sending, and if sending is the point at which the likely audience is to be determined, what would be the relevance of the post subsequently being blocked by the platform upon receipt? Does the likelihood of the post being blocked have to be considered? So could the offence still be committed if the post was unlikely to be blocked, but in fact was? Or, conversely, would the offence not be committed if the post was likely to be blocked, but slipped through? 

Such conundrums apart, the more hypothetical the conduct element of the offence, the more significant is the Law Commission’s rejection of a “reasonable person” when considering likelihood of harm. It leaves open the possibility that a notional member of a likely audience could foreseeably be someone of unusual, or even extreme, sensitivity.

Whether the likely audience member contemplated by the offence is actual or notional, as already noted the Law Commission’s intention appears to be that it would suffice if one person in the audience were likely to encounter the communication and likely to suffer harm as a result.

The question of whether the actual presence of someone in the audience has to be proved finds a parallel in offences under the Public Order Act 1986. These differ as to whether they require that a real person could have heard the relevant words, or simply that a hypothetical person could have done so. Thus for S.5(1) Public Order Act physical presence matters: were the words used “within the hearing or sight of a person” likely to be caused harm? The presence of an actual person likely to be caused harm has to be proved; but it does not have to be proved that such person actually heard the words or suffered harm. If the person present did hear them, the likelihood of their suffering relevant harm is judged according to their relevant characteristics. Thus a police officer may be regarded as possessing more fortitude than an ordinary member of the public.

In contrast, the offences of riot, affray and violent disorder under the Public Order Act are all expressly framed by reference to the effect of the conduct on a notional person of reasonable firmness hypothetically present at the scene; with no requirement that such a person be at, or be likely to be at, the scene.

Universal standards

One of the main criticisms of the existing law is that the supposedly objective categories of speech laid down (such as ‘grossly offensive’) are so vague as to be unacceptably subjective in their application by prosecutors and the courts. The Law Commission endorses that criticism. It rejects as unworkable universal standards for categories of speech, in favour of a factually context-specific harm-based approach.

Yet a completely hypothetical interpretation of the Law Commission’s proposed offence could require the court to carry out an exercise – attributing characteristics to a notional member of a general audience - as subjective as that for which the existing offences (or at least s.127) are rightly criticised.

The Law Commission emphasises that “likely” harm means a “real or substantial risk”, not a mere risk or possibility. But if the assumed victim is a notional rather than an actual member of a general audience where does that lead, if not into the forbidden territory of inviting the court to divine universal standards: a set of attributes with which a notional member of the audience has to be clothed?

Claims to have suffered actual harm

The converse of the Law Commission’s emphasis on “likely harm” is that if someone claims to have suffered harm from encountering the communication, or indeed proves that they actually have done so, that should not be conclusive.

In practice, as the Law Commission has acknowledged, evidence of actual harm to an actual person may count towards likelihood of harm (but may not be determinative). (Consultation Paper [5.90])

Thus the Law Commission states that “the mere fact that someone was harmed does not imply that harm was likely … the jury or magistrate will have to determine as a matter of fact that, at the point of sending, harm was likely. If a person has an extreme and entirely unforeseeable reaction, the element of likely harm will not be satisfied.” (Report [2.107])

However, the Law Commission has also rejected the suggestion that a reasonableness standard should be applied. The result appears to be that if one person of unusual sensitivity, sufficient to be at real or substantial risk of harm, is foreseeably likely to encounter the communication, then the “likely audience” requirement would be satisfied. Hence the significance of the possible argument that the larger the audience of a public post, the more likely that it may contain such a person.

Insertion into an audience

At the level of practical consequences, whichever interpretation of the proposed offence is correct – actual or hypothetical likely audience member – it appears to provide a route for someone to attempt to criminalise someone else’s controversial views by inserting themselves into a likely audience.  The Law Commission accepted the possibility of this tactic (Report [2.153]), but considered that other elements of the offence (the need to prove lack of reasonable excuse and intent to harm) would constitute sufficient protection from criminalisation.

However, whilst it discussed how a court might approach the matter, the Report did not address in detail the possible deterrent effect on continued communication, nor the interaction with the illegality provisions of the draft Online Safety Bill.

How might the tactic work? Let us assume a social media post to a general audience, not about any one person, but expressing views with which others may profoundly disagree – whether the subject matter be politics, religion, or any other area in which some may claim to be traumatised by views that they find repugnant.

Would such a communication be at risk of illegality if the audience is likely to contain someone who would find what was said severely distressing? The Law Commission’s answer is ‘No’: not because one sensitive person in a general audience is not enough, but first of all because the necessary intent to cause severe distress to a likely audience member would be lacking; and second, because ordinary (even if highly contentious) political discourse should count as a contribution to a matter of public interest (Consultation Paper [5.185] – [5.187], Report [2.152] – [2.153]).

Nevertheless, it would be an easy matter for someone who objects to the contents of the post to seek to put further communications at risk by entering the conversation. One reply from someone who claims to be severely distressed by the views expressed could create an increased risk (actual or perceived) of committing the offence if the views were to be repeated.

That would be the case whether ‘likely audience’ requires the presence of an actual or hypothetical audience member. If it requires a foreseeable actual audience member, one has now appeared. It could hardly be suggested that, for the future, their presence is not foreseeable. The question for the conduct element would be whether, as claimed, they would be likely to be harmed.

If, on the other hand, the “likely audience” is entirely hypothetical, would an intervention by a real person claiming to be harmed make any difference? There are two reasons to think that it could:

1.      If there were any doubt that it was foreseeable that the audience is likely to contain someone with that degree of sensitivity, that doubt is dispelled.

2.     In practice, as the Law Commission has acknowledged, evidence of actual harm to an actual person may count towards likelihood of harm (but may not be determinative).

On either interpretation of the offence, any further communications would be with knowledge of the audience member and their claim to have been harmed. That would create a more concrete factual context for an argument that likely harm resulting from any further communications was intentional.

Of course, if a further communication were to be prosecuted and go to trial it still might not amount to an offence. The context would have to be examined. Serious distress might not be established.  The prosecution might not be able to prove lack of reasonable excuse. Intent to harm might still not be established.

But that is not really the significant issue where chilling effect is concerned. Rational apprehension of increased risk of committing an offence, by virtue of crystallisation of a likely audience and the claim to harm, would be capable of creating a chilling effect on further communications.

The Law Commission may view the need to prove lack of reasonable excuse and intent to harm as fundamental to a court’s consideration. However, someone told that their potential criminal liability for future posts rests on those two criteria might, rationally, view the prospect with rather less equanimity.

If insertion into the audience has not chilled further communication, a further tactical step could be to notify the platform and assert that they have reasonable grounds to believe the continuing posts are illegal. Reasonable grounds (not actual illegality, manifest illegality or even likely illegality) is the threshold that would trigger the platform’s duty to take the posts down swiftly under S.9(3)(d) of the draft Online Safety Bill.

Conclusion

The Law Commission’s proposal draws some inspiration from legislation enacted in 2015 in New Zealand. That, too, is contextual and harm-based. However, the New Zealand offence is firmly anchored in actual harm to an actual identifiable person at whom the communication was targeted, and is qualified by an ‘ordinary reasonable person’ provision. The Law Commission has cut its recommended offence adrift from those moorings. 

That has significant consequences for the scope of the conduct element of the offence, especially when applied to public posts to a general audience. The structure of the conduct element also lends itself to tactical chilling of speech. It is questionable whether these concerns would be sufficiently compensated by the requirement to prove intent to harm and lack of reasonable excuse.

[Unintended negative at end of section 'Psychological harm' corrected 4 Dec 2021; Updated 29 April 2022 to note omission of "reasonable grounds to believe" in Bill as introduced to Parliament; and 11 July 2022 to note introduction of "reasonable grounds to infer" by proposed government amendment at Report Stage.]



Wednesday, 3 November 2021

The draft Online Safety Bill concretised

A.    Introduction 

1.       The draft Online Safety Bill is nothing if not abstract. Whether it is defining the adult (or child) of ordinary sensibilities, mandating proportionate systems and processes, or balancing safety, privacy, and freedom of speech within the law, the draft Bill resolutely eschews specifics.  

2.      The detailing of the draft Bill’s preliminary design is to be executed in due course by secondary legislation, with Ofcom guidance and Codes of Practice to follow. Even at that point, there is no guarantee that the outcome would be clear rules that would enable a user to determine on which side of the safety line any given item of content might fall.

3.      Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in?

4.      We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that.

B.    A concrete hypothetical

Our scenario concerns an amateur blogger who specialises in commenting on the affairs of his local authority. He writes a series of blogposts (which he also posts to his social media accounts) critical of a senior officer of the local authority, who has previously made public a history of struggling with mental health issues. The officer says that the posts have had an impact on her mental health and that she has sought counselling.

5.      This hypothetical scenario is adapted from the Sandwell Skidder case, in which a council officer brought civil proceedings for harassment under the Protection from Harassment Act 1997 against a local blogger, a self-proclaimed “citizen journalist”.

6.      The court described the posts in that case, although not factually untrue, as a “series of unpleasant, personally critical publications”. It emphasised that nothing in the judgment should be taken as holding that the criticisms were justified. Nevertheless, and not doubting what the council officer said about the impact on her, in a judgment running to 92 paragraphs the court held that the proceedings for harassment stood no reasonable prospect of success and granted the blogger summary judgment.

7.       In several respects the facts and legal analysis in the Sandwell Skidder judgment carry resonance for the duties that the draft Bill would impose on a user to user (U2U) service provider:

a.       The claim of impact on mental health.

b.      The significance of context (including the seniority of the council officer; the council officer’s own previous video describing her struggle with mental health issues; and the legal requirement for there to have been more than a single post by the defendant).

c.       The defendant being an amateur blogger rather than a professional journalist (the court held that the journalistic nature of the blog was what mattered, not the status of the person who wrote it).

d.      The legal requirement that liability for harassment should be interpreted by reference to Art 10 ECHR.

e.       The significance for the freedom of expression analysis of the case being one of publication to the world at large.

f.        The relevance that similar considerations would have to the criminal offence of harassment under the 1997 Act.

8.      Our hypothetical potentially requires consideration of service provider safety duties for illegality and (for a Category 1 service provider) content harmful to adults. (Category 1 service providers would be designated on the basis of being high risk by reason of size and functionality.)

9.      The scenario would also engage service provider duties in respect of some or all of freedom of expression, privacy, and (for a Category 1 service provider) journalistic content and content of democratic importance.

10.   We will assume, for simplicity, that the service provider in question does not have to comply with the draft Bill’s “content harmful to children” safety duty.

C.     The safety duties in summary

11.    The draft Bill’s illegality safety duties are of two kinds: proactive/preventative and reactive.

12.   The general proactive/preventative safety duties under S.9(3)(a) to (c) apply to priority illegal content designated as such by secondary legislation. Although these duties do not expressly stipulate monitoring and filtering, preventative systems and processes are to some extent implicit in e.g. the duty to ‘minimise the presence of priority illegal content’.

13.   It is noteworthy, however, that an Ofcom enforcement decision cannot require steps to be taken “to use technology to identify a particular kind of content present on the service with a view to taking down such content” (S.83(11)).

14.   Our hypothetical will assume that criminally harassing content has been designated as priority illegal content.

15.   The only explicitly reactive duty is under S.9(3)(d), which applies to all in-scope illegal content. The duty sits alongside the hosting protection in the eCommerce Directive, but is cast as a positive obligation to remove in-scope illegal content upon gaining awareness of its presence, rather than (as under the eCommerce Directive) merely exposing the provider to potential liability under the relevant substantive law. The knowledge threshold appears to be lower than that in the eCommerce Directive.

16.   There is also a duty under S.9(2), applicable to all in-scope illegality, to take “proportionate steps to mitigate and effectively manage” risks of physical and psychological harm to individuals. This is tied in some degree to the illegal content risk assessment that a service provider is required to carry out. For simplicity, we shall consider only the proactive and reactive illegality safety duties under S.9(3).

17.   Illegality refers to certain types of criminal offence set out in the draft Bill. They would include the harassment offence under the 1997 Act.

18.   The illegality safety duties apply to user content that the service provider has reasonable grounds to believe is illegal, even though it may not in fact be illegal. As the government has said in its Response to the House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age:

“Platforms will need to take action where they have reasonable grounds to believe that content amounts to a relevant offence. They will need to ensure their content moderation systems are able to decide whether something meets that test.”

19.   That, under the draft Bill’s definition of illegal content, applies not only to content actually present on the provider’s service, but to kinds of content that may hypothetically be present on its service in the future.

20.  That would draw the service provider into some degree of predictive policing. It also raises questions about the level of generality at which the draft Bill would require predictions to be made and how those should translate into individual decisions about concrete items of content.

21.   For example, would a complaint by a known person about a known content source that passed the ‘reasonable grounds’ threshold concretise the duty to minimise the presence of priority illegal content? Would that require the source of the content, or content about the complainant, to be specifically targeted by minimisation measures? This has similarities to the long running debate about ‘stay-down’ obligations on service providers.
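To make the granularity question concrete, here is a rough sketch (the names, threshold and toy matching logic are all my own invention, not anything in the draft Bill) of the difference between minimisation at the level of a kind of content and minimisation targeted at a known source or complainant:

    # Illustrative sketch only: two levels of granularity at which a system
    # "designed to minimise the presence of priority illegal content" might
    # operate. All names, thresholds and matching logic are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    def generic_minimisation(post: Post, model_score: float,
                             threshold: float = 0.8) -> bool:
        """Kind-of-content level: act on anything a model scores as likely
        priority illegal content, whoever posts it and whoever it concerns."""
        return model_score >= threshold

    def targeted_minimisation(post: Post, complained_sources: set[str],
                              complained_subjects: set[str]) -> bool:
        """Complaint level ('stay-down' style): act on further content from a
        known source, or about a known complainant, once a complaint passing
        the 'reasonable grounds' threshold has been received."""
        return (post.author in complained_sources
                or any(name.lower() in post.text.lower()
                       for name in complained_subjects))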

22.  The question of the required level of generality or granularity, which also arises in relation to the ‘content harmful to adults’ duty, necessitates close examination of the provisions defining the safety duties and the risk assessment duties upon which some aspects of the safety duties rest. It may be that there is not meant to be one answer to the question; that it all comes down to proportionality, Ofcom guidance and Codes of Practice.  However, even taking that into account, some aspects remain difficult to fit together satisfactorily. If there is an obvious solution to those, no doubt someone will point me to it.

23.  The “content harmful to adults” safety duty requires a Category 1 service provider to make clear in its terms and conditions how such content would be dealt with and to apply those terms and conditions consistently. There is a question, on the wording of the draft Bill, as to whether a service provider can state that ‘we do nothing about this kind of harmful content’. The government’s position is understood to be that that would be permissible.

24.  The government’s recent Response to the Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age says:

“Where harmful misinformation and disinformation does not cross the criminal threshold, the biggest platforms (Category 1 services) will be required to set out what is and is not acceptable on their services, and enforce the rules consistently. If platforms choose to allow harmful content to be shared on their services, they should consider other steps to mitigate the risk of harm to users, such as not amplifying such content through recommendation algorithms or applying labels warning users about the potential harm.”

25.  If the government means that considering those “other steps” forms part of the Category 1 service provider’s duty, it is not obvious from where in the draft Bill that might stem.

26.  In fulfilling any kind of safety duty under the draft Bill a service provider would be required to have regard to the importance of protecting users’ right to freedom of expression within the law. Similarly it has to have regard to the importance of protecting users from unwarranted infringements of privacy. (Parenthetically, in the Sandwell Skidder case privacy was held not to be a significant factor in view of the council officer’s own previous published video.)

27.  Category 1 providers would be under further duties to take into account the importance of journalistic content and content of democratic importance when making decisions about how to treat such content and whether to take action against a user generating, uploading or sharing such content.

D.    Implementing the illegality safety duties

Proactive illegality duties: S.9(3)(a) to (c)

28.  We have assumed that secondary legislation has designated criminally harassing content as priority illegal content. The provider has to have systems and processes designed to minimise the presence of priority illegal content, the length of time for which it is present, and the dissemination of such content. Those systems could be automated, manual or both.

29.  That general requirement has to be translated into an actual system or process making actual decisions about actual content. The system would (presumably) have to try to predict the variety of forms of harassment that might hypothetically be present in the future, and detect and identify those that pass the illegality threshold (reasonable grounds to believe that the content is criminally harassing).

30.  Simultaneously it would have to try to avoid false positives that would result in the suppression of user content falling short of that threshold. That would seem to follow from the service provider’s duty to have regard to the importance of protecting users’ right to freedom of expression within the law. For Category 1 service providers that may be reinforced by the journalistic content and content of democratic importance duties. On the basis of the Sandwell Skidder judgment our hypothetical blog should qualify at least as journalistic content.

31.   What would that involve in concrete terms? First, the system or process would have to understand what does and does not constitute a criminal offence. That would apply at least to human moderators. Automated systems might be expected to do likewise. The S.9(3) duty makes no distinction (albeit there appears to be tension between the proactive provisions of S.9(3) and the limitation on Ofcom’s enforcement power in S.83(11) (para 13 above)).

32.  Parenthetically, where harassment is concerned not only the offence under the 1997 Act might have to be understood. Hypothetical content could also have to be considered under any other potentially applicable offences - the S.127 Communications Act offences, say (or their possible replacement by a ‘psychological harm’ offence as recommended by the Law Commission); and the common law offence of public nuisance or its statutory replacement under the Policing Bill currently going through Parliament.

33.  It is worth considering, by reference to some extracts from the caselaw, what understanding the 1997 Act harassment offence might involve:

  •           There is no statutory definition of harassment. It “was left deliberately wide and open-ended” (Majrowski v Guy’s and St Thomas’s NHS Trust [2006] ICR 1199)

  •           The conduct must cross “the boundary from the regrettable to the unacceptable” (ibid)

  •           “… courts will have in mind that irritations, annoyances, even a measure of upset, arise at times in everybody’s day-to-day dealings with other people. Courts are well able to recognise the boundary between conduct which is unattractive, even unreasonable, and conduct which is oppressive and unacceptable” (ibid)

  •           Reference in the Act to alarming the person or causing distress is not a definition; it is merely guidance as to one element. (Hayes v Willoughby [2013] 1 WLR 935).

  •           “It would be a serious interference with freedom of expression if those wishing to express their own views could be silenced by, or threatened with, claims for harassment based on subjective claims by individuals that they feel offended or insulted.” (Trimingham v Associated Newspapers Ltd [2012] EWHC 1296)

  •           “When Article 10 [ECHR] is engaged then the Court must apply an intense focus on the relevant competing rights… . Harassment by speech cases are usually highly fact- and context-specific.” (Canada Goose v Persons Unknown [2019] EWHC 2459)

  •           “The real question is whether the conduct complained of has extra elements of oppression, persistence and unpleasantness and therefore crosses the line… . There may be a further question, which is whether the content of statements can be distinguished from their mode of delivery.” (Merlin Entertainments v Cave [2014] EWHC 3036)

  •           “[P]ublication to the world at large engages the core of the right to freedom of expression. … In the social media context it can be more difficult to distinguish between speech which is “targeted” at an individual and speech that is published to the world at large.” (McNally v Saunders [2021] EWHC 2012)

34.  Harassment under the 1997 Act is thus a highly nuanced concept - less of a bright line rule that can be translated into an algorithm and more of an exercise in balancing different rights and interests against background factual context – something that even the courts do not find easy.

35.  For the harassment offence the task of identifying criminal content on a U2U service is complicated by the central importance of context and repetition. The potential relevance of external context is illustrated by the claimant’s prior published video in the Sandwell Skidder case. A service provider’s systems are unlikely to be aware of relevant external context.

36.  As to repetition, initially lawful conduct may become unlawful as the result of the manner in which it is pursued and its persistence. That is because the harassment offence requires, in the case of conduct in relation to a single person, conduct on at least two occasions in relation to that person. That is a bright line rule. One occasion is not enough.

37.   It would seem, therefore, to be logically impossible for a proactive moderation system to detect a single post and validly determine that it amounts to criminal harassment, or even that there are reasonable grounds to believe that it does. The system would have to have detected and considered together, or perhaps inferred the existence of, more than one harassing post.
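The point can be put in concrete terms. Whatever else an automated system might attempt, the course of conduct requirement means that a single post can at most be held back for aggregation with later posts. A rough sketch (the structure and names are illustrative assumptions, not a proposal):

    # Illustrative sketch only: the 'course of conduct' requirement means that
    # a single post cannot by itself give reasonable grounds to believe the
    # 1997 Act offence is made out. Names and structures are assumptions.
    from collections import defaultdict

    # posts the system has attributed to each (author, target) pairing
    course_of_conduct: dict = defaultdict(list)

    def needs_contextual_review(posts: list[str]) -> bool:
        # Placeholder: no algorithmic stand-in is offered here for the
        # court-like balancing exercise described in the caselaw above.
        return True

    def assess_for_harassment(author: str, target: str, post_text: str) -> bool:
        """Return True only if there are now at least two posts by the same
        author in relation to the same person; otherwise harassment cannot
        arise, whatever the content of the individual post."""
        course_of_conduct[(author, target)].append(post_text)
        if len(course_of_conduct[(author, target)]) < 2:
            return False  # bright line rule: one occasion is not enough
        # Only now would the far harder contextual evaluation of oppression,
        # persistence, targeting and Article 10 factors even begin.
        return needs_contextual_review(course_of_conduct[(author, target)])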

38.  The court in the Sandwell Skidder case devoted 92 paragraphs of judgment to describing the facts, the law, and weighing up whether the Sandwell Skidder’s posts amounted to harassment under the 1997 Act. That luxury would not be available to the proactive detection and moderation systems apparently envisaged by the draft Bill, at least to the extent that - unlike a court - they would have to operate at scale and in real or near-real time.

Reactive illegality duty: S.9(3)(d)

39.  The reactive duty on the service provider under S.9(3)(d) is to have proportionate systems and processes in place designed to: “where [it] is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content”.

40.  Let us assume that the service provider’s proactive illegality systems and processes have not already suppressed references to our citizen journalist’s blogposts. Suppose that, instead of taking a harassment complaint to court, the subject of the blogposts complains to the service provider. What happens then?

41.   In terms of knowledge and understanding of the law of criminal harassment, nothing differs from the proactive duties. From a factual perspective, the complainant may well have provided the service provider with more context as seen from the complainant’s perspective.

42.  As with the proactive duties, the threshold that triggers the reactive takedown duty is not awareness that the content is actually illegal. If there are reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence, the service provider is positively obliged to have a system or process designed to take it down swiftly.

43.  At the same time, however, it is required to have regard to the importance of freedom of expression within the law (and, if a Category 1 service provider, to take into account the importance of journalistic content and content of democratic importance).

44.  Apart from the reduced threshold for illegality the exercise demanded of a service provider at this point is essentially that of a court. The fact that the service provider might not be sanctioned by the regulator for coming to an individual decision which the regulator did not agree with (see here) does not detract from the essentially judicial role that the draft Bill would impose on the service provider. 

E.     Implementing the ‘content harmful to adults’ safety duty

45.  Category 1 services would be under a safety duty in respect of ‘content harmful to adults’.

46.  What is ‘content harmful to adults’? It comes in two versions: priority and non-priority. The Secretary of State is able (under a peculiar regulation-making power that on the face of it is not limited to physical or psychological harm) to designate harassing content (whether or not illegal) as priority content harmful to adults.

47.  Content is non-priority content harmful to adults if its nature is such that “there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”.  A series of sub-definitions drills down to characteristics and sensibilities of groups of people, and then to those of known individuals. Non-priority content harmful to adults cannot also be illegal content (S.46(8)(a)).
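The drill-down might be pictured, very roughly, as a cascade (the structure and names below are my own inference from the drafting, offered for illustration only):

    # Illustrative sketch only: the apparent cascade of sub-definitions for
    # whose sensibilities are to be tested. An inference from the drafting,
    # not the Bill's own words; all names are assumptions.
    from typing import Optional

    def reference_person(known_individual: Optional[str],
                         targeted_group: Optional[str]) -> str:
        """Whose 'significant adverse psychological impact' is in issue?"""
        if known_individual is not None:
            # Where the provider knows the adult who is the subject of, or is
            # targeted by, the content, that adult's own characteristics apply
            # (see paragraph 58 below).
            return f"the known individual: {known_individual}"
        if targeted_group is not None:
            # Where the content affects a group, a member of that group with
            # its characteristics or sensibilities.
            return f"a member of the group: {targeted_group}"
        # Otherwise, the legal fiction: the adult of ordinary sensibilities.
        return "a hypothetical adult of ordinary sensibilities"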

48.  Whether the content be priority or non-priority, the Category 1 service provider has to explain clearly and accessibly in its terms and conditions how it would deal with actual content of that kind; and then apply those terms and conditions consistently (S.11).

49.  As already mentioned (para 23), the extent of the ‘content harmful to adults’ duty is debatable. ‘How’ could imply that such content should be dealt with in some way. The government’s intention is understood to be that the duty is transparency-only, so that the service provider is free to state in its terms and conditions that it does nothing about such content.

50.  Even on that basis, the practical question arises of how general or specific the descriptions of harmful content in the terms and conditions have to be. Priority content could probably be addressed at the generic level of kinds of priority content designated in secondary legislation. Whether our hypothetical blogpost would fall within any of those categories would depend on how harassing content had been described in secondary legislation – for instance, whether a course of conduct was stipulated, as with the criminal offence.

51.   The question of level of generality is much less easy to answer for non-priority content. For instance, the element of the ‘non-priority content harmful to adults’ definition that concerns known adults appears to have no discernible function in the draft Bill unless it in some way affects the Category 1 service provider’s ‘terms and conditions’ duty. Yet if it does have an effect of that kind, it is difficult to see what that could be intended to be.

52.  The fictional character of the “adult of ordinary sensibilities” (see here for a detailed discussion of this concept and its antecedents) sets out initially to define an objective standard for adverse psychological impact (albeit the sub-definitions progressively move away from that). An objective standard aims to address the problem of someone subjectively claiming to have suffered harm from reading or viewing material – a problem which, left to subjective assertion alone, carries the risk of embedding the sensitivities of the most easily offended reader.

53.  For non-priority content harmful to adults, the S.11 duty kicks in if harassing content has been identified as a risk in the “adults’ risk assessment” that a Category 1 service provider is required to undertake. As with illegal content, content harmful to adults includes content hypothetically present on the system.

54.  This relationship creates the conundrum that the higher the level of abstraction at which the adults’ risk assessment is conducted, the greater the gap that has to be bridged when translating to actual content; alternatively, if risk assessment is conducted at a more granular and concrete level, for instance down to known content sources and known individuals who are the subject of online content, it could rapidly multiply into unfeasibility.

55.   So, what happens if the Category 1 service provider is aware of a specific blog, or of specific content contained in a blog, or of a specific person who is the subject of posts in the blog, that has been posted to its service? Would that affect how it had to fulfil its duties in relation to content harmful to adults?

56.  Take first a known blog and consider the service provider’s transparency duty. Does the service provider have to explain in its terms and conditions how content from individually identified user sources is to be dealt with? On the face of it that would appear to be a strange result. However, the transparency duty and its underlying risk assessment duty are framed by means of an uneasy combination of references to ‘kind’ of content and ‘content’, which leaves the intended levels of generality or granularity difficult to discern.

57.   The obvious response to this kind of issue may be that a service provider is required only to put in place proportionate systems and processes. That, however, provides no clear answer to the concrete question that the service provider would face: do I have to name any specific content sources in my terms and conditions and explain how they will be dealt with; if so, how do I decide which?

58.  Turning now to a known subject of a blog, unlike for known content sources the draft Bill contains some specific, potentially relevant, provisions. It expressly provides that where the service provider knows of a particular adult who is the subject of user content on its service, or to whom it knows that such content is directed, it is that adult’s sensibilities and characteristics that are relevant. The legal fiction of the objective adult of ordinary sensibilities is replaced by the actual subject of the blogpost.

59.  So in the case of our hypothetical blog, once the council officer complains to the service provider, the service provider knows the complainant’s identity and also, crucially, knows of the assertion that she has suffered psychological harm as a result of content on its service.

60.  The service provider’s duty is triggered not by establishing actual psychological harm, but by reasonable grounds to believe that there is a material risk of the content having a significant adverse physical or psychological impact. Let us assume that the service provider has concluded that its ‘harmful to adults’ duty is at least arguably triggered. What does the service provider have to do?

61.   As with a known blog or blogpost, focusing the duty to the level of a known person raises the question: does the service provider have to state in its terms and conditions how posts about, or directed at, that named person will be dealt with? Does it have to incorporate a list of such known persons in its terms and conditions? It is hard to believe that that is the government’s intention. Yet combining the Category 1 safety duty under S.11(2)(b) with the individualised version of the 'adult of ordinary sensibilities' appears to lean in that direction.

62.  If that is not the consequence, and if the Category 1 duty in relation to content harmful to adults is ‘transparency-only’, then how (if at all) would the ‘known person’ provision of the draft Bill affect what the service provider is required to do? What function does it perform? If the ‘known person’ provision does have some kind of substantive consequence, what might that be? That may raise the question whether someone who claims to be at risk of significant adverse psychological impact from the activities of a blogger could exercise some degree of personal veto or some other kind of control over dissemination of the posts.

63.  Whatever the answer may be to the difficult questions that the draft Bill poses, what it evidently does do is propel service providers into a more central role in determining controversies: all in-scope service providers where a decision has to be made as to whether there are reasonable grounds to believe that the content is illegal, or presents a material risk of serious adverse psychological impact on an under-18; and Category 1 service providers additionally for content harmful to adults.



Monday, 1 November 2021

The draft Online Safety Bill: systemic or content-focused?

One of the more intriguing aspects of the draft Online Safety Bill is the government’s insistence that the safety duties under the draft Bill are not about individual items of content, but about having appropriate systems and processes in place; and that this is protective of freedom of expression.

Thus in written evidence to the Joint Parliamentary Committee scrutinising the draft Bill the DCMS said:

“The regulatory framework set out in the draft Bill is entirely centred on systems and processes, rather than individual pieces of content, putting these at the heart of companies' responsibilities.

The focus on robust processes and systems rather than individual pieces of content has a number of key advantages. The scale of online content and the pace at which new user-generated content is uploaded means that a focus on content would be likely to place a disproportionate burden on companies, and lead to a greater risk of over-removal as companies seek to comply with their duties. This could put freedom of expression at risk, as companies would be incentivised to remove marginal content. The focus on processes and systems protects freedom of expression, and additionally means that the Bill’s framework will remain effective as new harms emerge.

The regulator will be focused on oversight of the effectiveness of companies’ systems and processes, including their content moderation processes. The regulator will not make decisions on individual pieces of content, and will not penalise companies where their moderation processes are generally good, but inevitably not perfect.”   

The government appears to be arguing that since a service provider would not automatically be sanctioned for a single erroneous removal decision, it would tend to err on the side of leaving marginal content up. Why such an incentive would operate only in the direction of under-removal, when the same logic would apply to individual decisions in either direction, is unclear.

Be that as it may, elsewhere the draft Bill hardwires a bias towards over-removal into the illegal content safety duty: by setting the threshold at which the duty bites at ‘reasonable grounds to believe’ that the content is illegal, rather than actual illegality or even likelihood of illegality.
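To see why the choice of trigger matters, consider a moderation system that scores content for likely illegality. The numbers below are invented purely to illustrate the point; the draft Bill of course specifies no figures, and nothing here is drawn from it:

    # Illustrative sketch only: how a lower trigger threshold biases removal
    # decisions. All probabilities and thresholds are invented placeholders.

    ACTUAL_ILLEGALITY = 0.95   # near-certainty that a court would so find
    LIKELY_ILLEGALITY = 0.50   # more likely than not
    REASONABLE_GROUNDS = 0.30  # a plausible basis for believing it illegal

    def must_remove(estimated_probability_illegal: float,
                    trigger: float = REASONABLE_GROUNDS) -> bool:
        """The duty bites once the provider's estimate crosses the trigger.
        The lower the trigger, the more lawful-but-marginal content is caught."""
        return estimated_probability_illegal >= trigger

    # A post judged 35% likely to be found illegal would have to come down on a
    # 'reasonable grounds' trigger, but not on a 'likely illegality' trigger.
    print(must_remove(0.35, REASONABLE_GROUNDS), must_remove(0.35, LIKELY_ILLEGALITY))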

The government’s broader claim is that centring duties on systems and processes results in a regulatory regime that is not focused on individual pieces of content at all. This claim merits close scrutiny.

Safety duties, in terms of the steps required to fulfil them, can be of three kinds: 

  • Non-content. Duties with no direct effect on content at all, such as a duty to provide users with a reporting mechanism.
  • Content-agnostic. This is a duty that is independent of the kind of content involved, but nevertheless affects users’ content. By its nature a duty that is unrelated to (say) the illegality or harmfulness of content will tend to result in steps being taken (‘friction’ devices, for instance, or limits on reach) that would affect unobjectionable or positively beneficial content just as they affect illegal or legal but harmful content.
  • Content-related. These duties are framed specifically by reference to certain kinds of content: in the draft Bill, illegal, harmful to children and harmful to adults. Duties of this kind aim to affect those kinds of content in various ways, but carry a risk of collateral damage to other content.
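The difference between the second and third kinds can be illustrated with a simple sketch (the ‘friction’ delay and the model score below are invented examples, not measures prescribed or contemplated by the draft Bill):

    # Illustrative sketch only: a content-agnostic step versus a content-related
    # step. The delay, threshold and score are invented examples.

    def content_agnostic_friction(share_count: int) -> int:
        """A 'friction' device applied to every post alike: slow resharing once
        a post has spread widely, regardless of what it says."""
        return 30 if share_count > 1000 else 0  # seconds of delay before reshare

    def content_related_treatment(harm_score: float) -> str:
        """A step keyed to the kind of content: the treatment depends on a
        judgement (here, a model score) about what the post says."""
        if harm_score >= 0.9:
            return "remove"
        if harm_score >= 0.6:
            return "reduce reach / label"
        return "no action"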

In principle a content-related duty could encompass harm caused either by the informational content itself, or by the manner in which a message is conveyed. Messages with no informational content at all can cause harm: repeated silent telephone calls can instil fear or, at least, constitute a nuisance; flashing lights can provoke an epileptic seizure.  

The government’s emphasis on systems and processes to some extent echoes calls for a ‘systemic’ duty of care. To quote the Carnegie UK Trust’s evidence to the Joint Scrutiny Committee, arguing for a more systemic approach:

“To achieve the benefits of a systems and processes driven approach the Government should revert to an overarching general duty of care where risk assessment focuses on the hazards caused by the operation of the platform rather than on types of content as a proxy for harm.” 

A systemic duty would certainly include the first two categories of duty: non-content and content-agnostic.  It seems inevitable that a systemic duty would also encompass content-related duties. While steps taken pursuant to a duty may range more broadly than a binary yes/no content removal decision, that does not detract from the inevitable need to decide what (if any) steps to take according to the kind of content involved.

Indeed it is notable how rapidly discussion of a systemic duty of care tends to move on to categories of harmful content, such as hate speech and harassment. Carnegie’s evidence, while criticising the draft Bill’s duties for focusing too much on categories of content, simultaneously censures it for not spelling out for the ‘content harmful to adults’ duty how “huge volumes of misogyny, racism, antisemitism etc – that are not criminal but are oppressive and harmful – will be addressed”.

Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree.

To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy.

It is possible to conceive of a systemic safety duty expressed in such general terms that it would sweep up anything in the system that might be considered capable of causing harm (albeit – unless limited to risk of physical injury – it would still inevitably struggle, as does the draft Bill, with the subjective nature of harms said to be caused by informational content). A systemic duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky.

The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.

The draft Bill – content-related or not?

To what extent are the draft Bill’s duties content-related, and to what extent systemic?

Most of the draft Bill’s duties are explicitly content-related. They are directed at online user content that is illegal or harmful to adults or children. To the extent that, for instance, the effect of algorithms on the likelihood of encountering content has to be considered, that is in relation to those kinds of content.

For content-related duties the draft Bill draws no obvious distinction between informational and non-informational causes of harm. So risk of physical injury as a result of reading anti-vax content is treated indistinguishably from risk of an epileptic seizure as a result of seeing flashing images.

The most likely candidates in the draft Bill for content-agnostic or non-content duties are Sections 9(2) and 10(2)(a). For illegal content S.9(2) requires the service provider to “take proportionate steps to mitigate and effectively manage the risks of harm to individuals”, identified in the service provider’s most recent S.7(8) illegal content risk assessment. S.10(2) contains a similar duty in relation to harm to children in different age groups, based on the most recent S.7(9) children’s risk assessment.

Although the S.7 risk assessments are about illegal content and content harmful to children, neither of the 9(2) and 10(2)(a) safety duties is expressly limited to harm arising from those kinds of content.

Possibly, those duties are intended to relate back to Sections 7(8)(e) and 7(9)(e) respectively. Those require risk assessments of the “different ways in which the service is used, and the impact that has on the level of risk of harm that might be suffered” by individuals or children respectively – again without expressly referring to the kinds of content that constitute the subject-matter of Sections 7(8) and 7(9).  However, to deduce a pair of wholly content-agnostic duties in Sections 9(2) and 10(2)(a) would seem to require those S.7 risk assessment factors to be considered independently of their respective contexts.

Whatever may be the scope of S.9(2) and 10(2)(a), the vast majority of the draft Bill’s safety duties are drafted expressly by reference to in-scope illegal or legal but harmful content. Thus, for example, the government notes at para [34] of its evidence:

“User-to-user services will be required to operate their services using proportionate systems and processes to minimise the presence, duration and spread of illegal content and to remove it swiftly once they are aware of it.” (emphasis added)

As would be expected, those required systems and processes are framed by reference to a particular type of user content. The same is true for duties that apply to legal content defined as harmful to adults or children.

The Impact Assessment accompanying the draft Bill states:

“…it is expected that undertaking additional content moderation (through hiring additional content moderators or using automated moderation) will represent the largest compliance cost faced by in-scope businesses.” (Impact Assessment [166])

That compliance cost is estimated at £1.7 billion over 10 years. That does not suggest a regime that is not focused on content.

Individual user content

The contrast drawn by the government is between systems and processes on the one hand, and “individual” pieces of content on the other.

The draft Bill defines harm as physical or psychological harm. What could result in such harm? The answer, online, can only be individual user content: that which, whether alone or in combination, singly or repeated, we say and see online. Various factors may influence, to differing extents, what results in which user content being seen by whom: user choices such as joining discussion forums and channels, choosing topics, following each other, rating each other’s posts and so on, or platform-operated recommendation and promotion feeds. But none of that detracts from the fact that it is what is posted – items of user content – that results in any impact.

The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that.

For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with this item of content in this context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than on punishing service providers for individual content decision failures.

A platform duty of care has been likened to an obligation to prevent risk of injury from a protruding nail in a floorboard. The analogy is flawed, but even taking that analogy at face value the draft Bill casts service providers in the role of hammer, not nail. The dangerous nail is users’ speech. Service providers are the tool chosen to hammer it into place. Ofcom directs the use of the tool. Whether an individual strike of the hammer may or may not attract regulatory sanction is a matter of little consequence to the nail.

Even if Ofcom would not be involved in making individual content decisions, it is difficult to see how it could avoid at some point evaluating individual items of content. Thus the provisions for use of technology notices require the “prevalence” of CSEA and/or terrorism content to be assessed before serving a notice. That inevitably requires Ofcom to assess whether material present on the service does or does not fall within those defined categories of illegality.

More broadly, it is difficult to see how Ofcom could evaluate for compliance purposes the proportionality and effectiveness of filtering, monitoring, takedown and other systems and processes without considering whether the user content affected does or does not qualify as illegal or harmful content. That would again require a concrete assessment of at least some actual items of user content.
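By way of illustration, even the most ‘systemic’ method of assessing prevalence that one could imagine – classifying a random sample of items – still turns on item-by-item judgements. The sketch below is an assumption about method, not a description of anything Ofcom has proposed:

    # Illustrative sketch only: estimating the "prevalence" of a category of
    # content by classifying a random sample, item by item. The method and
    # numbers are assumptions, not Ofcom methodology.
    import random

    def estimate_prevalence(items: list[str], is_in_category,
                            sample_size: int = 1000) -> float:
        """Estimate the proportion of items falling within a defined category."""
        if not items:
            return 0.0
        sample = random.sample(items, min(sample_size, len(items)))
        flagged = sum(1 for item in sample if is_in_category(item))
        return flagged / len(sample)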

It is not immediately obvious why the government has set so much store by the claimed systemic nature of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from individual content decisions it can avoid accusations of state censorship. If so, that ignores the fact that service providers, via their safety duties, are proxies for the regulator. The effect of the legislation on individual items of user content is no less concrete because service providers are required to make decisions under the supervision of Ofcom, rather than if Ofcom were wielding the blue pencil, the muffler or the content warning generator itself.