Wednesday 3 November 2021

The draft Online Safety Bill concretised

A.    Introduction 

1.       The draft Online Safety Bill is nothing if not abstract. Whether it is defining the adult (or child) of ordinary sensibilities, mandating proportionate systems and processes, or balancing safety, privacy, and freedom of speech within the law, the draft Bill resolutely eschews specifics.  

2.      The detailing of the draft Bill’s preliminary design is to be executed in due course by secondary legislation, with Ofcom guidance and Codes of Practice to follow. Even at that point, there is no guarantee that the outcome would be clear rules that would enable a user to determine on which side of the safety line any given item of content might fall.

3.      Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in?

4.      We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that.

B.    A concrete hypothetical

Our scenario concerns an amateur blogger who specialises in commenting on the affairs of his local authority. He writes a series of blogposts (which he also posts to his social media accounts) critical of a senior officer of the local authority, who has previously made public a history of struggling with mental health issues. The officer says that the posts have had an impact on her mental health and that she has sought counselling.

5.      This hypothetical scenario is adapted from the Sandwell Skidder case, in which a council officer brought civil proceedings for harassment under the Protection from Harassment Act 1997 against a local blogger, a self-proclaimed “citizen journalist”.

6.      The court described the posts in that case, although they were not factually untrue, as a “series of unpleasant, personally critical publications”. It emphasised that nothing in the judgment should be taken as holding that the criticisms were justified. Nevertheless, and not doubting what the council officer said about the impact on her, in a judgment running to 92 paragraphs the court held that the harassment claim had no reasonable prospect of success and granted the blogger summary judgment.

7.       In several respects the facts and legal analysis in the Sandwell Skidder judgment carry resonance for the duties that the draft Bill would impose on a user-to-user (U2U) service provider:

a.       The claim of impact on mental health.

b.      The significance of context (including the seniority of the council officer; the council officer’s own previous video describing her struggle with mental health issues; and the legal requirement for there to have been more than a single post by the defendant).

c.       The defendant being an amateur blogger rather than a professional journalist (the court held that the journalistic nature of the blog was what mattered, not the status of the person who wrote it).

d.      The legal requirement that liability for harassment should be interpreted by reference to Art 10 ECHR.

e.       The significance for the freedom of expression analysis of the case being one of publication to the world at large.

f.        The relevance that similar considerations would have to the criminal offence of harassment under the 1997 Act.

8.      Our hypothetical potentially requires consideration of service provider safety duties for illegality and (for a Category 1 service provider) content harmful to adults. (Category 1 service providers would be designated on the basis of being high risk by reason of size and functionality.)

9.      The scenario would also engage service provider duties in respect of some or all of freedom of expression, privacy, and (for a Category 1 service provider) journalistic content and content of democratic importance.

10.   We will assume, for simplicity, that the service provider in question does not have to comply with the draft Bill’s “content harmful to children” safety duty.

C.     The safety duties in summary

11.    The draft Bill’s illegality safety duties are of two kinds: proactive/preventative and reactive.

12.   The general proactive/preventative safety duties under S.9(3)(a) to (c) apply to priority illegal content designated as such by secondary legislation. Although these duties do not expressly stipulate monitoring and filtering, preventative systems and processes are to some extent implicit in e.g. the duty to ‘minimise the presence of priority illegal content’.

13.   It is noteworthy, however, that an Ofcom enforcement decision cannot require steps to be taken “to use technology to identify a particular kind of content present on the service with a view to taking down such content” (S.83(11)).

14.   Our hypothetical will assume that criminally harassing content has been designated as priority illegal content.

15.   The only explicitly reactive duty is under S.9(3)(d), which applies to all in-scope illegal content. The duty sits alongside the hosting protection in the eCommerce Directive, but is cast as a positive obligation to remove in-scope illegal content upon gaining awareness of its presence, rather than (as in the eCommerce Directive) merely exposing the provider to potential liability under the relevant substantive law. The knowledge threshold appears to be lower than that in the eCommerce Directive.

16.   There is also a duty under S.9(2), applicable to all in-scope illegality, to take “proportionate steps to mitigate and effectively manage” risks of physical and psychological harm to individuals. This is tied in some degree to the illegal content risk assessment that a service provider is required to carry out. For simplicity, we shall consider only the proactive and reactive illegality safety duties under S.9(3).

17.   Illegality refers to certain types of criminal offence set out in the draft Bill. They would include the harassment offence under the 1997 Act.

18.   The illegality safety duties apply to user content that the service provider has reasonable grounds to believe is illegal, even though it may not in fact be illegal. As the government has said in its Response to the House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age:

“Platforms will need to take action where they have reasonable grounds to believe that content amounts to a relevant offence. They will need to ensure their content moderation systems are able to decide whether something meets that test.”

19.   That, under the draft Bill’s definition of illegal content, applies not only to content actually present on the provider’s service, but to kinds of content that may hypothetically be present on its service in the future.

20.  That would draw the service provider into some degree of predictive policing. It also raises questions about the level of generality at which the draft Bill would require predictions to be made and how those should translate into individual decisions about concrete items of content.

21.   For example, would a complaint by a known person about a known content source that passed the ‘reasonable grounds’ threshold concretise the duty to minimise the presence of priority illegal content? Would that require the source of the content, or content about the complainant, to be specifically targeted by minimisation measures? This has similarities to the long running debate about ‘stay-down’ obligations on service providers.

22.  The question of the required level of generality or granularity, which also arises in relation to the ‘content harmful to adults’ duty, necessitates close examination of the provisions defining the safety duties and the risk assessment duties upon which some aspects of the safety duties rest. It may be that there is not meant to be one answer to the question; that it all comes down to proportionality, Ofcom guidance and Codes of Practice.  However, even taking that into account, some aspects remain difficult to fit together satisfactorily. If there is an obvious solution to those, no doubt someone will point me to it.

23.  The “content harmful to adults” safety duty requires a Category 1 service provider to make clear in its terms and conditions how such content would be dealt with and to apply those terms and conditions consistently. There is a question, on the wording of the draft Bill, as to whether a service provider can state that ‘we do nothing about this kind of harmful content’. The government’s position is understood to be that that would be permissible.

24.  The government’s recent Response to the Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age says:

“Where harmful misinformation and disinformation does not cross the criminal threshold, the biggest platforms (Category 1 services) will be required to set out what is and is not acceptable on their services, and enforce the rules consistently. If platforms choose to allow harmful content to be shared on their services, they should consider other steps to mitigate the risk of harm to users, such as not amplifying such content through recommendation algorithms or applying labels warning users about the potential harm.”

25.  If the government means that considering those “other steps” forms part of the Category 1 service provider’s duty, it is not obvious from where in the draft Bill that might stem.

26.  In fulfilling any kind of safety duty under the draft Bill a service provider would be required to have regard to the importance of protecting users’ right to freedom of expression within the law. Similarly, it would have to have regard to the importance of protecting users from unwarranted infringements of privacy. (Parenthetically, in the Sandwell Skidder case privacy was held not to be a significant factor in view of the council officer’s own previous published video.)

27.  Category 1 providers would be under further duties to take into account the importance of journalistic content and content of democratic importance when making decisions about how to treat such content and whether to take action against a user generating, uploading or sharing such content.

D.    Implementing the illegality safety duties

Proactive illegality duties: S.9(3)(a) to (c)

28.  We have assumed that secondary legislation has designated criminally harassing content as priority illegal content. The provider has to have systems and processes designed to minimise the presence of priority illegal content, the length of time for which it is present, and the dissemination of such content. Those systems could be automated, manual or both.

29.  That general requirement has to be translated into an actual system or process making actual decisions about actual content. The system would (presumably) have to try to predict the variety of forms of harassment that might hypothetically be present in the future, and detect and identify those that pass the illegality threshold (reasonable grounds to believe that the content is criminally harassing).

30.  Simultaneously it would have to try to avoid false positives that would result in the suppression of user content falling short of that threshold. That would seem to follow from the service provider’s duty to have regard to the importance of protecting users’ right to freedom of expression within the law. For Category 1 service providers that may be reinforced by the journalistic content and content of democratic importance duties. On the basis of the Sandwell Skidder judgment our hypothetical blog should qualify at least as journalistic content.

31.   What would that involve in concrete terms? First, the system or process would have to understand what does and does not constitute a criminal offence. That would apply at least to human moderators. Automated systems might be expected to do likewise. The S.9(3) duty makes no distinction (albeit there appears to be tension between the proactive provisions of S.9(3) and the limitation on Ofcom’s enforcement power in S.83(11) (para 13 above)).
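
To give that requirement a concrete shape, the following is a minimal illustrative sketch (in Python; every function name, score and threshold is hypothetical, not anything the draft Bill or Ofcom prescribes) of the decision structure that a proactive system might adopt. The unimplemented scoring function is the point: it stands in for the legal understanding described above, and for the contextual judgment explored in the caselaw extracts below.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    subject: str   # the person the post is about, where identifiable
    text: str

# Hypothetical thresholds - the draft Bill specifies none of this.
REASONABLE_GROUNDS = 0.8   # treat as priority illegal content
GREY_AREA = 0.5            # route to human review rather than suppress

def harassment_likelihood(post: Post, history: list[Post]) -> float:
    """Placeholder for the judgment the draft Bill implicitly delegates to
    the provider: are there reasonable grounds to believe this post forms
    part of a criminally harassing course of conduct? No classifier can be
    assumed to capture the caselaw factors (oppression, persistence,
    context, Article 10 balancing) that a court would weigh."""
    raise NotImplementedError

def proactive_decision(post: Post, history: list[Post]) -> str:
    """Sketch of an S.9(3)(a)-(c)-style triage, balancing minimisation of
    priority illegal content against the duty to have regard to users'
    freedom of expression within the law."""
    score = harassment_likelihood(post, history)
    if score >= REASONABLE_GROUNDS:
        return "minimise"        # reduce presence, duration, dissemination
    if score >= GREY_AREA:
        return "human review"    # avoid false positives suppressing lawful speech
    return "no action"
```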

32.  Parenthetically, where harassment is concerned it is not only the offence under the 1997 Act that might have to be understood. Hypothetical content could also have to be considered under any other potentially applicable offences - the S.127 Communications Act offences, say (or their possible replacement by a ‘psychological harm’ offence as recommended by the Law Commission); and the common law offence of public nuisance or its statutory replacement under the Policing Bill currently going through Parliament.

33.  It is worth considering, by reference to some extracts from the caselaw, what understanding the 1997 Act harassment offence might involve:

•  There is no statutory definition of harassment. It “was left deliberately wide and open-ended” (Majrowski v Guy’s and St Thomas’s NHS Trust [2006] ICR 1999)

•  The conduct must cross “the boundary from the regrettable to the unacceptable” (ibid)

•  “… courts will have in mind that irritations, annoyances, even a measure of upset, arise at times in everybody’s day-to-day dealings with other people. Courts are well able to recognise the boundary between conduct which is unattractive, even unreasonable, and conduct which is oppressive and unacceptable” (ibid)

•  Reference in the Act to alarming the person or causing distress is not a definition; it is merely guidance as to one element (Hayes v Willoughby [2013] 1 WLR 935)

•  “It would be a serious interference with freedom of expression if those wishing to express their own views could be silenced by, or threatened with, claims for harassment based on subjective claims by individuals that they feel offended or insulted.” (Trimingham v Associated Newspapers Ltd [2012] EWHC 1296)

•  “When Article 10 [ECHR] is engaged then the Court must apply an intense focus on the relevant competing rights… . Harassment by speech cases are usually highly fact- and context-specific.” (Canada Goose v Persons Unknown [2019] EWHC 2459)

•  “The real question is whether the conduct complained of has extra elements of oppression, persistence and unpleasantness and therefore crosses the line… . There may be a further question, which is whether the content of statements can be distinguished from their mode of delivery.” (Merlin Entertainments v Cave [2014] EWHC 3036)

•  “[P]ublication to the world at large engages the core of the right to freedom of expression. … In the social media context it can be more difficult to distinguish between speech which is “targeted” at an individual and speech that is published to the world at large.” (McNally v Saunders [2021] EWHC 2012)

34.  Harassment under the 1997 Act is thus a highly nuanced concept - less of a bright line rule that can be translated into an algorithm and more of an exercise in balancing different rights and interests against background factual context – something that even the courts do not find easy.

35.  For the harassment offence the task of identifying criminal content on a U2U service is complicated by the central importance of context and repetition. The potential relevance of external context is illustrated by the claimant’s prior published video in the Sandwell Skidder case. A service provider’s systems are unlikely to be aware of relevant external context.

36.  As to repetition, initially lawful conduct may become unlawful as the result of the manner in which it is pursued and its persistence. That is because the harassment offence requires, in the case of conduct in relation to a single person, conduct on at least two occasions in relation to that person. That is a bright line rule. One occasion is not enough.

37.   It would seem, therefore, to be logically impossible for a proactive moderation system to detect a single post and validly determine that it amounts to criminal harassment, or even that there are reasonable grounds to believe that it does. The system would have to have detected and considered together, or perhaps inferred the existence of, more than one harassing post.
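
By way of illustration, the course-of-conduct requirement is about the only element of the offence that could be reduced to a mechanical check. A sketch of that check (hypothetical, in Python) might be no more than this; everything beyond it is the fact- and context-sensitive judgment described in the caselaw extracts above.

```python
def could_be_course_of_conduct(posts_about_same_person: list[str]) -> bool:
    """The 1997 Act requires, for conduct in relation to a single person,
    conduct on at least two occasions in relation to that person.
    A single post, however unpleasant, cannot satisfy this requirement."""
    return len(posts_about_same_person) >= 2

# One detected post about the council officer is not enough; the system
# would have to have found (or inferred) at least two.
assert could_be_course_of_conduct(["post 1"]) is False
assert could_be_course_of_conduct(["post 1", "post 2"]) is True
```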

38.  The court in the Sandwell Skidder case devoted 92 paragraphs of judgment to describing the facts and the law and to weighing up whether the Sandwell Skidder’s posts amounted to harassment under the 1997 Act. That luxury would not be available to the proactive detection and moderation systems apparently envisaged by the draft Bill, at least to the extent that - unlike a court - they would have to operate at scale and in real or near-real time.

Reactive illegality duty: S.9(3)(d)

39.  The reactive duty on the service provider under S.9(3)(d) is to have proportionate systems and processes in place designed to: “where [it] is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content”.

40.  Let us assume that the service provider’s proactive illegality systems and processes have not already suppressed references to our citizen journalist’s blogposts. Suppose that, instead of taking a harassment complaint to court, the subject of the blogposts complains to the service provider. What happens then?

41.   In terms of knowledge and understanding of the law of criminal harassment, nothing differs from the proactive duties. From a factual perspective, the complainant may well have provided the service provider with more context as seen from the complainant’s perspective.

42.  As with the proactive duties, the threshold that triggers the reactive takedown duty is not awareness that the content is actually illegal. If there are reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence, the service provider is positively obliged to have a system or process designed to take it down swiftly.

43.  At the same time, however, it is required to have regard to the importance of freedom of expression within the law (and, if a Category 1 service provider, to take into account the importance of journalistic content and content of democratic importance).
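
Putting those competing obligations side by side, a hypothetical reactive workflow might be structured along the following lines (Python, with all names and outcomes invented for illustration). The point is that each branch conceals a legal judgment, and the draft Bill does not say how the countervailing duties are to be weighed against the obligation to act swiftly.

```python
def reasonable_grounds_illegal(content: str, complaint_context: str) -> bool:
    """Placeholder for the assessment that triggers S.9(3)(d): reasonable
    grounds to believe the content amounts to a relevant offence, taking
    into account any context supplied by the complainant."""
    raise NotImplementedError

def is_journalistic_or_democratic(content: str) -> bool:
    """Placeholder for the Category 1 judgments about journalistic content
    and content of democratic importance."""
    raise NotImplementedError

def reactive_decision(content: str, complaint_context: str,
                      is_category_1: bool) -> str:
    # The takedown duty is triggered by reasonable grounds, not by
    # established illegality.
    if not reasonable_grounds_illegal(content, complaint_context):
        return "leave up"
    # The provider must simultaneously have regard to freedom of expression
    # within the law and, if Category 1, take into account the importance
    # of journalistic content and content of democratic importance.
    if is_category_1 and is_journalistic_or_democratic(content):
        return "escalate for expedited human review"   # one possible compromise
    return "take down swiftly"
```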

44.  Apart from the reduced threshold for illegality, the exercise demanded of a service provider at this point is essentially that of a court. The fact that the service provider might not be sanctioned by the regulator for coming to an individual decision with which the regulator did not agree (see here) does not detract from the essentially judicial role that the draft Bill would impose on the service provider.

E.     Implementing the ‘content harmful to adults’ safety duty

45.  Category 1 services would be under a safety duty in respect of ‘content harmful to adults’.

46.  What is ‘content harmful to adults’? It comes in two versions: priority and non-priority. The Secretary of State is able (under a peculiar regulation-making power that on the face of it is not limited to physical or psychological harm) to designate harassing content (whether or not illegal) as priority content harmful to adults.

47.  Content is non-priority content harmful to adults if its nature is such that “there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”.  A series of sub-definitions drills down to characteristics and sensibilities of groups of people, and then to those of known individuals. Non-priority content harmful to adults cannot also be illegal content (S.46(8)(a)).

48.  Whether the content be priority or non-priority, the Category 1 service provider has to explain clearly and accessibly in its terms and conditions how it would deal with actual content of that kind; and then apply those terms and conditions consistently (S.11).

49.  As already mentioned (para 23), the extent of the ‘content harmful to adults’ duty is debatable. ‘How’ could imply that such content should be dealt with in some way. The government’s intention is understood to be that the duty is transparency-only, so that the service provider is free to state in its terms and conditions that it does nothing about such content.

50.  Even on that basis, the practical question arises of how general or specific the descriptions of harmful content in the terms and conditions have to be. Priority content could probably be addressed at the generic level of kinds of priority content designated in secondary legislation. Whether our hypothetical blogpost would fall within any of those categories would depend on how harassing content had been described in secondary legislation – for instance, whether a course of conduct was stipulated, as with the criminal offence.

51.   The question of level of generality is much less easy to answer for non-priority content. For instance, the element of the ‘non-priority content harmful to adults’ definition that concerns known adults appears to have no discernible function in the draft Bill unless it in some way affects the Category 1 service provider’s ‘terms and conditions’ duty. Yet if it does have an effect of that kind, it is difficult to see what that could be intended to be.

52.  The fictional character of the “adult of ordinary sensibilities” (see here for a detailed discussion of this concept and its antecedents) sets out initially to define an objective standard for adverse psychological impact (albeit the sub-definitions progressively move away from that). An objective standard aims to address the problem of someone subjectively claiming to have suffered harm from reading or viewing material: a purely subjective test would carry the risk of embedding the sensitivities of the most easily offended reader.

53.  For non-priority content harmful to adults, the S.11 duty kicks in if harassing content has been identified as a risk in the “adults’ risk assessment” that a Category 1 service provider is required to undertake. As with illegal content, content harmful to adults includes content hypothetically present on the system.

54.  This relationship creates the conundrum that the higher the level of abstraction at which the adults’ risk assessment is conducted, the greater the gap that has to be bridged when translating to actual content; alternatively, if risk assessment is conducted at a more granular and concrete level, for instance down to known content sources and known individuals who are the subject of online content, it could rapidly multiply into unfeasibility.

55.   So, what happens if the Category 1 service provider is aware of a specific blog, or of specific content contained in a blog, or of a specific person who is the subject of posts in the blog, that has been posted to its service? Would that affect how it had to fulfil its duties in relation to content harmful to adults?

56.  Take first a known blog and consider the service provider’s transparency duty. Does the service provider have to explain in its terms and conditions how content from individually identified user sources is to be dealt with? On the face of it that would appear to be a strange result. However, the transparency duty and its underlying risk assessment duty are framed by means of an uneasy combination of references to ‘kind’ of content and ‘content’, which leaves the intended levels of generality or granularity difficult to discern.

57.   The obvious response to this kind of issue may be that a service provider is required only to put in place proportionate systems and processes. That, however, provides no clear answer to the concrete question that the service provider would face: do I have to name any specific content sources in my terms and conditions and explain how they will be dealt with; if so, how do I decide which?

58.  Turning now to a known subject of a blog, unlike for known content sources the draft Bill contains some specific, potentially relevant, provisions. It expressly provides that where the service provider knows of a particular adult who is the subject of user content on its service, or to whom it knows that such content is directed, it is that adult’s sensibilities and characteristics that are relevant. The legal fiction of the objective adult of ordinary sensibilities is replaced by the actual subject of the blogpost.
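
Expressed schematically (and hypothetically; the labels below are mine, not the draft Bill’s), the substitution works something like this: the objective yardstick applies unless the provider knows of a particular adult who is the subject or target of the content, in which case that adult’s own characteristics govern the assessment of material risk of significant adverse impact.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Adult:
    """Reference point for the 'content harmful to adults' assessment."""
    description: str
    characteristics: tuple = ()

# The legal fiction applied by default.
ORDINARY_ADULT = Adult(description="adult of ordinary sensibilities")

def reference_adult(known_subject: Optional[Adult]) -> Adult:
    """Where the provider knows of a particular adult who is the subject of
    the content, or to whom it knows the content is directed, that adult's
    own sensibilities and characteristics replace the objective standard."""
    return known_subject if known_subject is not None else ORDINARY_ADULT

def material_risk_of_significant_impact(content: str, adult: Adult) -> bool:
    """Placeholder for the judgment the definition requires: is there a
    material risk of the content having, or indirectly having, a significant
    adverse physical or psychological impact on this reference adult?"""
    raise NotImplementedError
```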

59.  So in the case of our hypothetical blog, once the council officer complains to the service provider, the service provider knows the complainant’s identity and also, crucially, knows of her assertion that she has suffered psychological harm as a result of content on its service.

60.  The service provider’s duty is triggered not by establishing actual psychological harm, but by reasonable grounds to believe that there is a material risk of the content having a significant adverse physical or psychological impact. Let us assume that the service provider has concluded that its ‘harmful to adults’ duty is at least arguably triggered. What does the service provider have to do?

61.   As with a known blog or blogpost, focusing the duty to the level of a known person raises the question: does the service provider have to state in its terms and conditions how posts about, or directed at, that named person will be dealt with? Does it have to incorporate a list of such known persons in its terms and conditions? It is hard to believe that that is the government’s intention. Yet combining the Category 1 safety duty under S.11(2)(b) with the individualised version of the 'adult of ordinary sensibilities' appears to lean in that direction.

62.  If that is not the consequence, and if the Category 1 duty in relation to content harmful to adults is ‘transparency-only’, then how (if at all) would the ‘known person’ provision of the draft Bill affect what the service provider is required to do? What function does it perform? If the ‘known person’ provision does have some kind of substantive consequence, what might that be? That may raise the question whether someone who claims to be at risk of significant adverse psychological impact from the activities of a blogger could exercise some degree of personal veto or some other kind of control over dissemination of the posts.

63.  Whatever the answer may be to the difficult questions that the draft Bill poses, what it evidently does do is propel service providers into a more central role in determining controversies: all in-scope service providers where a decision has to be made as to whether there are reasonable grounds to believe that the content is illegal, or presents a material risk of significant adverse psychological impact on an under-18; and Category 1 service providers additionally for content harmful to adults.


