Sunday 16 May 2021

Harm Version 3.0: the draft Online Safety Bill

Two years on from the April 2019 Online Harms White Paper, the government has published its draft Online Safety Bill. It is a hefty beast: 133 pages and 141 sections. It raises a slew of questions, not least around press and journalistic material and the newly coined “content of democratic importance”. Also, for the first time, the draft Bill spells out how the duty of care regime would apply to search engines, not just to providers of user-generated content sharing services.

This post offers first impressions of a central issue that started to take final shape in the government’s December 2020 Full Response to the consultation: the apparent conflict between imposing content monitoring and removal obligations on service providers on the one hand and, on the other, the government’s oft-repeated commitment to freedom of expression, now translated into express duties on those providers.

That issue overlaps with a question that has dogged the Online Harms project from the outset: what does it mean by safety and harm?  The answer shapes the potential impact of the legislation on freedom of expression. The broader and vaguer the notion of harm, the greater the subjectivity involved in complying with the duty of care, and the greater the consequent dangers for online users' legitimate speech. 

The draft Bill represents the government's third attempt at defining harm (if we include the White Paper, which set no limit). The scope of harm proposed in its second version (the Full Response) has now been significantly widened.

For legal but harmful content the government apparently now means to set an overall backstop limitation of "physical or psychological harm", but whether the draft Bill achieves that is doubtful. In any event that would still be broader than the general definition of harm proposed in the Full Response: a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. For illegal content the Full Response's general definition would not apply; and the new backstop definition would have only limited relevance.   

Moderation and filtering duties

If one provision can be said to lie at the heart of the draft Bill, it is Section 9(3). This describes duties that would apply to all the estimated 24,000 in-scope service providers. It is notable that, pre-Brexit, duties (a) to (c) would have fallen foul of the ECommerce Directive's Article 15 ban on imposing general monitoring obligations on hosting providers. Section 9(3) thus departs from 20 years of EU and UK policy aimed at protecting the freedom of expression and privacy of online users.

Section 9(3) imposes: 

A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the presence of priority illegal content;

(b) minimise the length of time for which priority illegal content is present;

(c) minimise the dissemination of priority illegal content;

(d) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

Duty (d) approximately parallels the hosting liability shield in the ECommerce Directive, but is recast as a positive regulatory obligation to operate takedown processes, rather than as potential exposure to liability for a user's content should the shield be disapplied once the provider gains knowledge of its illegality.

As is typical of regulatory legislation, the draft Bill is not a finished work. It is more like a preliminary drawing intended to be filled out later. For instance, the extent of the proactive moderation and filtering obligations implicit in Section 9(3) depends on what constitutes ‘priority illegal content’. That is not set out in the draft Bill, but would be designated in secondary legislation prepared by the Secretary of State. The same holds for ‘priority content that is harmful to adults’, and for a parallel category relating to children, which underpin other duties in the draft Bill. 

Since Section 9(3) and the other duties make no sense without the various kinds of priority content first being designated, regulations would presumably have to be made before the legislation can come into force.  The breadth of the Secretary of State's discretion in designating priority content is discussed below.

If secondary legislation is a layer of detail applied to the preliminary drawing, a further layer, yet more detailed, will consist of codes of practice, guidance and risk profiles for different kinds of service, all issued by Ofcom.

This regulatory vessel would be pointed in the government's desired direction by a statement of strategic online safety priorities issued by the Secretary of State, to which Ofcom would be required to have regard. The statement could set out particular outcomes. The Secretary of State would first have to consult with Ofcom, then lay the draft before Parliament so as to give either House the opportunity to veto it.  

The moderation and filtering obligations implicit in Section 9(3) and elsewhere in the draft Bill would take the lion’s share (£1.7bn) of the £2.1bn that the government’s Impact Assessment reckons in-scope providers will have to spend on complying with the legislation over the first 10 years. Moderation is expected to be both technical and human:

“…it is expected that undertaking additional content moderation (through hiring additional content moderators or using automated moderation) will represent the largest compliance cost faced by in-scope businesses.” (Impact Assessment [166])

Additional moderation costs are expected to be incurred in greater proportion by the largest (Category 1) providers: 7.5% of revenue for Category 1 organisations and 1.9% for all other in-scope organisations (Impact Assessment [180]). That presumably reflects the obligations specific to Category 1 providers in relation to legal but 'harmful to adults’ content.

Collateral damage to legitimate speech

Imposition of moderation and filtering obligations, especially at scale, raises the twin spectres of interference with users’ privacy and collateral damage to legitimate speech. The danger to legitimate speech arises from misidentification of illegal or harmful content and lack of clarity about what is illegal or harmful. Incidence of collateral damage resulting from imposition of such duties is likely to be affected by:

  •     The proof threshold that triggers the duty. The lower the standard to which a service provider has to be satisfied of illegality or harm, the greater the likelihood of erroneous removal or inhibition.
  •     Scale. The greater the scale at which moderation or filtering is carried out, the less feasible it is to take account of individual context. Even for illegal content, the assessment of illegality will, for many kinds of offence, be context-sensitive.
  •     Subjectivity of the harm. If harm depends upon the subjective perception of the reader, a standard geared to the most easily offended reader may develop.
  •     Vagueness. If the kind of harm is so vaguely defined that no sensible line can be drawn between identification and misidentification, then collateral damage is hard-wired into the regime.
  •     Scope of harm. The broader the scope of harm to which a duty applies, the more likely it is to include subjective or vague harms.

Against these criteria, how does the draft Bill score on the collateral damage index?

Proof threshold. S.41 defines illegal content as content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence. Content therefore does not have to be definitely illegal in order for the Section 9(3) duties to apply.

The scale of the required moderation and filtering is apparent from the Impact Assessment.

The scope of harm has swung back and forth. Version 1.0 was contained in the government’s April 2019 White Paper. It encompassed the vaguest and most subjective kinds of harm. Most kinds of illegality were within scope. For harmful but legal content there was no limiting definition of harm. Effectively, the proposed regulator (now Ofcom) could have deemed what is and is not harmful.

By the time of the December 2020 Full Response, the government had come round to the idea that the proposed duty of care should relate only to defined kinds of harm, whether arising from illegal user content or from user content that was legal but harmful [Full Response 2.24].

In this Version 2.0, harm would consist of a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. A criminal offence would therefore be in scope of a provider’s duty of care only if the offence presented that kind of risk.

Although still problematic in retaining some element of subjectivity through the inclusion of psychological impact, the Full Response proposal significantly shifted the focus of the duty of care towards personal safety properly so-called. It was thus more closely aligned with the subject matter of comparable offline duties of care.

Version 3.0. The draft Bill states as a general definition that "harm" means "physical or psychological harm". This is a pared-down version of the general definition proposed in the Full Response, shorn of the requirements of reasonable foreseeability and significant adverse impact. However, the draft Bill does not stipulate that 'harmful' should be understood in the same limited way. The result of that omission, combined with other definitions, could be to give the Secretary of State regulation-making powers for legal but harmful content that are, on their face, not limited to physical or psychological harm.

This may well not be the government's intention. When giving evidence to the Commons Digital, Culture, Media and Sport Committee last week, the Secretary of State, Oliver Dowden, stressed (14:57 onwards) that regulations would not go beyond physical or psychological harm. This could usefully be explored during pre-legislative scrutiny.

For legal but harmful content the draft Bill does provide a more developed version of the Full Response’s general definition of harm, tied to impact on a hypothetical adult or child "of ordinary sensibilities". This is evidently an attempt to inject some objectivity into the assessment of harm. It then adds further layers addressing impact on members of particularly affected groups or particularly affected people with certain characteristics (neither specified), impact on a specific person about whom the service provider knows, and indirect impact. These provisions will undoubtedly attract close scrutiny.  

In any event, this complex definition of harm does not have universal application within the draft Bill. It governs only a residual category of content outside the Secretary of State’s designated descriptions of priority harmful content for adults and children. The longer the Secretary of State's lists of priority harmful content designated in secondary legislation, the less ground in principle would be covered by content to which the complex definition applies. The Secretary of State is not constrained by the complex definition when designating priority harmful content. 

Nor, on the face of it, is the Secretary of State limited to physical or psychological harm. However, as already flagged, that may well not represent the intention of the government. That omission would be all the more curious given that Ofcom has a consultation and recommendation role in the regulation-making process, and the simple definition (physical or psychological harm) does constrain Ofcom’s recommendation remit.

The Secretary of State has a parallel power to designate priority illegal content (which underpins the Section 9(3) duties above) by secondary legislation. He cannot include offences relating to:

- Infringement of intellectual property rights
- Safety or quality of goods (as opposed to what kind of goods they are)
- Performance of a service by a person not qualified to perform it

In considering whether to designate an offence, the Secretary of State does have to take into account, among other things, the level of risk of harm being caused to individuals in the UK by the presence of content that amounts to the offence, and the severity of that harm. Harm here does mean physical or psychological harm.

As with harmful content, illegal content includes a residual category designed to catch illegality that is neither specifically identified in the draft Bill (terrorism and CSEA offences) nor designated in secondary legislation as priority illegal content. This category consists of “Other offences of which the victim or intended victim is an individual (or individuals).” While confined to offences against individuals, it is not limited to physical or psychological harm.

The first round of secondary legislation designating categories of priority illegal and harmful content would require an affirmative resolution of each House of Parliament. Subsequent regulations would be subject to annulment by negative resolution of either House.

To the extent that the government’s rowback from the general definition of harm contained in the Full Response enables more vague and subjective kinds of harm to be brought back into scope of service provider duties, the risk of collateral damage to legitimate speech would correspondingly increase.

Internal contradictions

The draft Bill lays down various risk assessments that in-scope providers must undertake, taking into account a ‘risk profile’ of that kind of service prepared by Ofcom and to be included in its guidance about risk assessments.

As well as the Section 9(3) moderation and filtering duties set out above, for illegal content a service provider would be under a duty to take proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the service’s most recent illegal content risk assessment.

In parallel with these duties, the service provider would be placed under a duty to have regard to the importance of protecting users’ right to freedom of expression within the law when deciding on, and implementing, safety policies and procedures.

However, since the very duties imposed by the draft Bill create a risk of collateral damage to legitimate speech, a conflict between duties is inevitable. The potential for conflict increases with the scope of the duties and the breadth and subjectivity of their subject matter. 

The government has acknowledged the risk of collateral damage in the context of Category 1 services, which would be subject to duties in relation to lawful content harmful to adults in addition to the duties applicable to ordinary providers.

Category 1 service providers would have to prepare assessments of their impact on freedom of expression and (as interpreted by the government's launch announcement) demonstrate that they have taken steps to mitigate any adverse effects. The government commented:

“These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.” (emphasis added)

This passage acknowledges the danger inherent in the legislation: that efforts to comply with the duties it imposes would carry a risk of collateral damage by over-removal. That is true not only of the ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.

No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. The suggestion that it could smacks of faith in the existence of a tech magic wand, and does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it.

Internal conflicts between duties, underpinned by the Version 3.0 approach to the notion of harm, sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as, to use the Secretary of State’s words, a “censor’s charter”.