Showing posts with label Censorship. Show all posts

Sunday, 13 July 2025

The Ordinary Reasonable Person encounters (or not) cyber-abuse

The recent decision of the Australian Administrative Review Tribunal in X Corp and Elston v eSafety Commissioner illustrates the complexities that can arise when the law tasks a regulator or platform to adjudge an online post.

The decision grapples with a dilemma that is familiar, albeit under a very different legislative regime, from the UK’s Online Safety Act 2023. It also features in the police takedown notice scheme for unlawful knives and other weapons content contained in the Crime and Policing Bill (currently making its way through Parliament).

At a high level, the issue is how to achieve rapid removal of impugned user content (typically because it is illegal under the general law or defined as harmful in some way), while not affecting legitimate posts. The specific challenge is that the contents of the post alone are often insufficient to determine whether the legal line has been crossed. Contextual information, which may be off-platform and involve investigation, is required. The Elston case provides a vivid illustration.

The twin imperatives of rapid removal and adequate investigation of context stand in conflict with each other. A regime that requires contravention to be adjudged solely on the contents of a post, ignoring external context, is likely to be either ineffectual or overreaching, depending on which way the adjudicator is required to jump in the absence of relevant information.

Australia’s Online Safety Act 2021 empowers the eSafety Commissioner, but only following receipt of a complaint, to issue a content removal notice to a social media platform if she is satisfied that a user’s post constitutes cyber-abuse material targeted at an Australian adult. (In this respect the Australian legislation resembles the UK Crime and Policing Bill more than our Online Safety Act: Ofcom has no power under the OSA to require removal of a specific item of user content. The Crime and Policing Bill will institute a regime of police takedown notices for unlawful knives and other weapons content, albeit not predicated on receipt of a complaint.)

Cyber-abuse material under the Australian Act has two key elements. The eSafety Commissioner has to be satisfied of both before issuing a removal notice:

Intention Element: an ordinary reasonable person would conclude that it is likely that the material was intended to have an effect of causing serious harm to a particular Australian adult.

Offence Element: an ordinary reasonable person in the position of the Australian adult would regard the material as being, in all the circumstances, menacing, harassing or offensive.

Serious harm is defined as serious physical harm or serious harm to a person’s mental health, whether temporary or permanent. Serious harm to a person’s mental health includes:

(a) serious psychological harm; and

(b) serious distress;

but does not include mere ordinary emotional reactions such as those of only distress, grief, fear or anger.

The need to assess what an ‘ordinary reasonable person’ would think is common to both elements. For the Intention Element the Ordinary Reasonable Person has to determine the likely intention of the person who posted the material. For the Offence Element, in order to determine how the material should be regarded, the Ordinary Reasonable Person has to be put in the position of the Australian adult putatively intended to be targeted.

The reason why the legislation hypothesises an Ordinary Reasonable Person is to inject some objectivity into what could otherwise be an overly subjective test.

The Tribunal observed that the Intention Element converted what would otherwise be “a broadly available censorship tool based on emotional responses to posted material” into a provision that “protects people from a much narrower form of conduct where causing serious harm to a particular person was, in the relevant sense, intended” [21]. (This has similarities to the heavy lifting done by the mental element in broadly drafted terrorism offences.)

We are in familiar legal territory with fictive characters such as the Ordinary Reasonable Person. It is reminiscent of the fleeting appearance of the Person of Ordinary Sensibilities in the draft UK Online Safety Bill.

Nevertheless, as the Tribunal decision illustrates, the attributes of the hypothetical person may need further elucidation. Those characteristics can materially affect the balance between freedom of expression and the protective elements of the legislation in question.

Thus, what is the Ordinary Reasonable Person taken generally to know? What information can the Ordinary Reasonable Person look at in deciding whether intention to cause serious harm is likely? How likely is likely?

The information available to the Ordinary Reasonable Person

The question of what information can, or should, be taken into account is especially pertinent to legislation that requires moderation decisions to be made that will impinge on freedom of expression. The Tribunal posed the question thus:

“… whether findings on the Intention Element should be made on an impressionistic basis after considering a limited range of material, or whether findings should be made after careful consideration, having regard to any evidence obtained as part of any investigation or review process.” [45]

It found that:

“The history and structure of the provisions suggest that while impressionistic decision-making may be authorised in the first instance, early decisions made on limited information can and should be re-visited both internally and externally as more information becomes available, including as a result of input from the affected end-user.” [45]

That was against the background that:

“…the legislation as passed allows for rapid decision making by the Commissioner to deal with material that appears, on its face, to be within a category that the Act specified could be the subject of a removal notice. However, once action has been taken, the insertion of s 220A confirms that Parliament accepted that there needed to be an opportunity for those affected by the action to have an opportunity to address whether the material was actually within the prohibited statutory category. External review by the Tribunal was provided for with the same end in mind.” [44]

The UK Online Safety Act states that a platform making an illegality judgement should do so on the basis of all relevant information reasonably available to it. Ofcom guidance fleshes out what information is to be regarded as reasonably available.

The UK Crime and Policing Bill says nothing about what information a police officer giving an unlawful weapons content removal notice, or a senior officer reviewing such a notice, should seek out and take into account. Nor does it provide any opportunity for the user whose content is condemned to make representations, or to be notified of the decision.

Generally speaking, the less information that can or should be taken into account, the greater the likelihood of arbitrary decision-making and consequent violation of freedom of expression rights.

In the Elston case three different variations on the Ordinary Reasonable Person were put to the Tribunal. The eSafety Commissioner argued that the Ordinary Reasonable Person should be limited to considering the poster’s profile on X and the material constituting the post. The poster’s subsequent evidence about his intention and motivations was irrelevant to determining whether the Intention Element was satisfied. The same was said to apply to evidence about the poster’s knowledge of the Australian person said to be targeted. (The Tribunal observed that that would mean that even material contained in the complaint that preceded the removal notice would be excluded from consideration.)

As to the general knowledge of the Ordinary Reasonable Person, the eSafety Commissioner argued that (for the purposes of the case before the Tribunal, which concerned a post linking to and commenting on a newspaper article about a transgender person) the Ordinary Reasonable Person would be aware that material on X can bully individuals; would understand that public discourse around sexuality and gender can be polarising as well as emotionally charged; and would understand that calling a transgender man a woman would be to act contrary to that transgender man’s wishes.

X Corp argued that the decisionmaker was entitled to have regard to evidence (including later evidence) concerning immediate context as at the time of the post, but not more. The facts which could be known to the ordinary reasonable person when making their assessment included facts about the subject of the post or the poster and their relationship at the time of the post, but not evidence about what happened afterwards.

The significance of the different positions was that on X Corp’s case, later evidence could be taken into account to the effect that the poster did not know, or know of, the person who was the subject of the post until he read the newspaper article. That was not apparent from the post itself or the poster’s profile.

Mr Elston (the poster) argued that a wide range of material could be acquired and treated as available to the ordinary reasonable person when asked to decide whether the material posted ‘was intended to have an effect of causing serious harm’.

On this view of the statutory power, evidence obtained before or after the post, during the course of the investigation and concerning matters that occurred after the post was made, could be treated as available to the Ordinary Reasonable Person when considering the Intention Element.

On this approach, Mr Elston’s own evidence about his intention would be “relevant to consider, but not necessarily conclusive of what an ordinary reasonable person would conclude about his intention.” [62]

The Tribunal agreed with Mr Elston’s approach:

“The existence of the investigative powers available to the Commissioner and the complaint-based nature of the power provide a powerful basis for concluding that the Commissioner and the Tribunal should be feeding all of the available evidence into the assessment of what the ‘ordinary reasonable person’ would conclude was likely before determining whether the Intention Element is satisfied.” [74]

It added:

“The Parliament was concerned to give end-users an opportunity to address claims about their conduct both on internal review and by providing review in the Tribunal. To read the ordinary reasonable person lens as a basis for disregarding evidence submitted by either the complainant or the end-user or discovered by the Commissioner during an investigation is not consistent with the fair, high quality decision-making the Parliament made provision for.” [77]

The Tribunal then spelled out the consequences of the Commissioner’s approach:

“…In many circumstances, including this case, limiting the information that can be considered by the ‘ordinary reasonable person’ to the post and closely related material, results in critical information not being available.” [81]

It went on:

“In this case, there is no evidence in any of the material posted and associated with the post, that the post was ever brought to the attention of Mr Cook [the complainant]. …

That Mr Cook was aware of the post is only discoverable by reference to the complaint submitted to the Commissioner. If a decision maker is restricted to knowing that a post was made to a limited audience, none of whom included Mr Cook, reaching the conclusion that the material was intended to cause serious harm to Mr Cook is going to be difficult. In those circumstances, where there appears to be no evidence to which the decision maker can have regard in order to make a finding that the post came to Mr Cook’s attention, let alone was intended to come to his attention, a decision to issue a removal notice could not be sustained.” [81]

The Tribunal reiterated:

“In many cases, it will be the complaint that provides critical context to allow an ordinary reasonable person to conclude that serious harm was intended.” [81]

The Tribunal concluded that evidence about what happened after the post was posted could be relevant if it shed light on the likely intention of the poster. Similarly, evidence about prior behaviour of third parties in response to certain posts could be relevant, even if it was only discoverable by the regulator using compulsory powers:

“So long as evidence sheds light on the statutory question, then it can and should be considered. It would be inappropriate in advance of a particular factual scenario being presented to the decision-maker to say that there are whole categories of evidence that cannot be considered because the statutory test in all circumstances renders the material irrelevant.” [87]

Nevertheless, that did not mean that the concept of the ‘ordinary and reasonable’ person had no effect:

“It moves the assessment away from a specific factual inquiry concerning the actual thought process of the poster and what effect they intended to achieve by the post. I must undertake a more abstract inquiry about what an independent person (who isn’t me) would think was the poster’s intention having regard to the available evidence. Provided evidence is relevant to that question, then it can and should be considered.” [89]

Whilst specific to the Australian statute and its fictive Ordinary Reasonable Person, this discussion neatly illustrates the point that has repeatedly been made (and often ignored): that platform judgements as to illegality required by the UK Online Safety Act will very often require off-platform contextual information and cannot sensibly be made on the basis of a bare user post and profile.

The point assumes greater significance with real-time proactive automated content moderation – something that Ofcom is proposing to extend – which by its very nature is unlikely to have access to off-platform contextual information.

The discussion also speaks eloquently to the silence of the Crime and Policing Bill on what kind and depth of investigation a police officer should conduct in order to be satisfied as to the presence of unlawful weapons content.

Likelihood of serious harm

The other significant point that the Tribunal had to consider was what the statute meant by ‘likely’ that serious harm was intended. The rival contentions were ‘real chance’ and ‘more probable than not’. The Tribunal held that, in the statutory context, the latter was right. The conclusion is notable for acknowledging the adverse consequences for freedom of expression of adopting a lower standard:

“A finding by the ordinary reasonable person that a person was setting out to cause serious harm to another is a serious, adverse finding with implications for freedom of expression. It is not the kind of finding that should be made when it is only possible that serious harm was intended.” [119]

The standard set by the UK Online Safety Act for making content illegality judgements is “reasonable grounds to infer”. It remains questionable, to say the least, whether that standard is compatible with ECHR Article 10. The Crime and Policing Bill says no more than that the police officer must be ‘satisfied’ that the material is unlawful weapons content.  

The Tribunal’s conclusion

On the facts of the case, the Tribunal concluded that an ordinary reasonable person in the position of the complainant Mr Cook would regard the post as offensive; but that the Intention Element was not satisfied. That depended crucially on the broader contextual evidence:

“Read in isolation, the post looks to be an attempt to wound Mr Cook and upset him and cause him distress, perhaps even serious distress. If an ordinary reasonable person was only aware of the post, then it may be open to find that the poster’s intention was likely to be to cause serious harm to Mr Cook. However, when the broader context is known and understood, it is difficult to read the post as intended to harm Mr Cook, or intended to have others direct criticism towards Mr Cook or designed to facilitate vitriol by spreading personal information about him.” [191]

Amongst the broader context was lack of evidence that the poster intended the post to come to Mr Cook’s attention.

“For the post to do any harm it needed to be read by Mr Cook. While I am satisfied that Mr Elston was indifferent to whether the post did come to Mr Cook’s attention and indifferent to whether or not it distressed him, there is no evidence to support the conclusion that the post was made with the intention of it being brought to Mr Cook’s attention.” [197]

Part of the reasoning behind that conclusion was that Mr Elston’s post did not tag Mr Cook’s user handle, but only that of the World Health Organisation (which had appointed Mr Cook to an advisory panel):

“It is notable that Mr Elston only included the handle for the WHO in his post and there is nothing in the body of the post that attempts to facilitate the contacting of Mr Cook by Mr Elston’s followers. Mr Cook’s name is not used in the body of the post.” [200]

Overall, the Tribunal concluded:

“When the evidence is considered as a whole I am not satisfied that an ordinary reasonable person would conclude that by making the post Mr Elston intended to cause Mr Cook serious harm. In the absence of any evidence that Mr Elston intended that Mr Cook would receive and read the post, and in light of the broader explanation as to why Mr Elston made the post, I am satisfied that an ordinary reasonable person would not conclude that that it is likely that the post was intended to have an effect of causing serious harm to Mr Cook.” [207]

For present purposes the actual result in the Elston case matters less than the illustration that it provides of what can be involved in making judgements about removal or blocking of posts against a statutory test: whether that evaluation be done by a regulator, a platform discharging a duty imposed by statute or (in the likely future case of unlawful weapons content) the police.


Monday, 1 November 2021

The draft Online Safety Bill: systemic or content-focused?

One of the more intriguing aspects of the draft Online Safety Bill is the government’s insistence that the safety duties under the draft Bill are not about individual items of content, but about having appropriate systems and processes in place; and that this is protective of freedom of expression.

Thus in written evidence to the Joint Parliamentary Committee scrutinising the draft Bill the DCMS said:

“The regulatory framework set out in the draft Bill is entirely centred on systems and processes, rather than individual pieces of content, putting these at the heart of companies' responsibilities.

The focus on robust processes and systems rather than individual pieces of content has a number of key advantages. The scale of online content and the pace at which new user-generated content is uploaded means that a focus on content would be likely to place a disproportionate burden on companies, and lead to a greater risk of over-removal as companies seek to comply with their duties. This could put freedom of expression at risk, as companies would be incentivised to remove marginal content. The focus on processes and systems protects freedom of expression, and additionally means that the Bill’s framework will remain effective as new harms emerge.

The regulator will be focused on oversight of the effectiveness of companies’ systems and processes, including their content moderation processes. The regulator will not make decisions on individual pieces of content, and will not penalise companies where their moderation processes are generally good, but inevitably not perfect.”   

The government appears to be arguing that since a service provider would not automatically be sanctioned for a single erroneous removal decision, it would tend to err on the side of leaving marginal content up. Why such an incentive would operate only in the direction of under-removal, when the same logic would apply to individual decisions in either direction, is unclear.

Be that as it may, elsewhere the draft Bill hardwires a bias towards over-removal into the illegal content safety duty: by setting the threshold at which the duty bites at ‘reasonable grounds to believe’ that the content is illegal, rather than actual illegality or even likelihood of illegality.

The government’s broader claim is that centring duties on systems and processes results in a regulatory regime that is not focused on individual pieces of content at all. This claim merits close scrutiny.

Safety duties, in terms of the steps required to fulfil them, can be of three kinds: 

  • Non-content. Duties with no direct effect on content at all, such as a duty to provide users with a reporting mechanism.
  • Content-agnostic. This is a duty that is independent of the kind of content involved, but nevertheless affects users’ content. By its nature a duty that is unrelated to (say) the illegality or harmfulness of content will tend to result in steps being taken (‘friction’ devices, for instance, or limits on reach) that would affect unobjectionable or positively beneficial content just as they affect illegal or legal but harmful content.
  • Content-related. These duties are framed specifically by reference to certain kinds of content: in the draft Bill, illegal, harmful to children and harmful to adults. Duties of this kind aim to affect those kinds of content in various ways, but carry a risk of collateral damage to other content.

In principle a content-related duty could encompass harm caused either by the informational content itself, or by the manner in which a message is conveyed. Messages with no informational content at all can cause harm: repeated silent telephone calls can instil fear or, at least, constitute a nuisance; flashing lights can provoke an epileptic seizure.  

The government’s emphasis on systems and processes to some extent echoes calls for a ‘systemic’ duty of care. To quote the Carnegie UK Trust’s evidence to the Joint Scrutiny Committee, arguing for a more systemic approach:

“To achieve the benefits of a systems and processes driven approach the Government should revert to an overarching general duty of care where risk assessment focuses on the hazards caused by the operation of the platform rather than on types of content as a proxy for harm.” 

A systemic duty would certainly include the first two categories of duty: non-content and content-agnostic.  It seems inevitable that a systemic duty would also encompass content-related duties. While steps taken pursuant to a duty may range more broadly than a binary yes/no content removal decision, that does not detract from the inevitable need to decide what (if any) steps to take according to the kind of content involved.

Indeed it is notable how rapidly discussion of a systemic duty of care tends to move on to categories of harmful content, such as hate speech and harassment. Carnegie’s evidence, while criticising the draft Bill’s duties for focusing too much on categories of content, simultaneously censures it for not spelling out for the ‘content harmful to adults’ duty how “huge volumes of misogyny, racism, antisemitism etc – that are not criminal but are oppressive and harmful – will be addressed”.

Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content – to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree.

To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy.

It is possible to conceive of a systemic safety duty expressed in such general terms that it would sweep up anything in the system that might be considered capable of causing harm (albeit – unless limited to risk of physical injury – it would still inevitably struggle, as does the draft Bill, with the subjective nature of harms said to be caused by informational content). A systemic duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky.

The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.

The draft Bill – content-related or not?

To what extent are the draft Bill’s duties content-related, and to what extent systemic?

Most of the draft Bill’s duties are explicitly content-related. They are aimed at online user content that is illegal or harmful to adults or children. To the extent that, for instance, the effect of algorithms on the likelihood of encountering content has to be considered, that is in relation to those kinds of content.

For content-related duties the draft Bill draws no obvious distinction between informational and non-informational causes of harm. So risk of physical injury as a result of reading anti-vax content is treated indistinguishably from risk of an epileptic seizure as a result of seeing flashing images.

The most likely candidates in the draft Bill for content-agnostic or non-content duties are Sections 9(2) and 10(2)(a). For illegal content S.9(2) requires the service provider to “take proportionate steps to mitigate and effectively manage the risks of harm to individuals”, identified in the service provider’s most recent S.7(8) illegal content risk assessment. S.10(2)(a) contains a similar duty in relation to harm to children in different age groups, based on the most recent S.7(9) children’s risk assessment.

Although the S.7 risk assessments are about illegal content and content harmful to children, neither the S.9(2) nor the S.10(2)(a) safety duty is expressly limited to harm arising from those kinds of content.

Possibly, those duties are intended to relate back to Sections 7(8)(e) and 7(9)(e) respectively. Those require risk assessments of the “different ways in which the service is used, and the impact that has on the level of risk of harm that might be suffered” by individuals or children respectively – again without expressly referring to the kinds of content that constitute the subject-matter of Sections 7(8) and 7(9).  However, to deduce a pair of wholly content-agnostic duties in Sections 9(2) and 10(2)(a) would seem to require those S.7 risk assessment factors to be considered independently of their respective contexts.

Whatever may be the scope of S.9(2) and 10(2)(a), the vast majority of the draft Bill’s safety duties are drafted expressly by reference to in-scope illegal or legal but harmful content. Thus, for example, the government notes at para [34] of its evidence:

“User-to-user services will be required to operate their services using proportionate systems and processes to minimise the presence, duration and spread of illegal content and to remove it swiftly once they are aware of it.” (emphasis added)

As would be expected, those required systems and processes are framed by reference to a particular type of user content. The same is true for duties that apply to legal content defined as harmful to adults or children.

The Impact Assessment accompanying the draft Bill states:

“…it is expected that undertaking additional content moderation (through hiring additional content moderators or using automated moderation) will represent the largest compliance cost faced by in-scope businesses.” (Impact Assessment [166])

That compliance cost is estimated at £1.7 billion over 10 years. That does not suggest a regime that is not focused on content.

Individual user content

The contrast drawn by the government is between systems and processes on the one hand, and “individual” pieces of content on the other.

The draft Bill defines harm as physical or psychological harm. What could result in such harm? The answer, online, can only be individual user content: that which, whether alone or in combination, singly or repeated, we say and see online. Various factors may influence, to differing extents, what results in which user content being seen by whom: user choices such as joining discussion forums and channels, choosing topics, following each other, rating each other’s posts and so on, or platform-operated recommendation and promotion feeds. But none of that detracts from the fact that it is what is posted – items of user content – that results in any impact.

The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that.

For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with this item of content in this context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than with punishing service providers for individual content decision failures.

A platform duty of care has been likened to an obligation to prevent risk of injury from a protruding nail in a floorboard. The analogy is flawed, but even taking that analogy at face value the draft Bill casts service providers in the role of hammer, not nail. The dangerous nail is users’ speech. Service providers are the tool chosen to hammer it into place. Ofcom directs the use of the tool. Whether an individual strike of the hammer may or may not attract regulatory sanction is a matter of little consequence to the nail.

Even if Ofcom would not be involved in making individual content decisions, it is difficult to see how it could avoid at some point evaluating individual items of content. Thus the provisions for use of technology notices require the “prevalence” of CSEA and/or terrorism content to be assessed before serving a notice. That inevitably requires Ofcom to assess whether material present on the service does or does not fall within those defined categories of illegality.

More broadly, it is difficult to see how Ofcom could evaluate for compliance purposes the proportionality and effectiveness of filtering, monitoring, takedown and other systems and processes without considering whether the user content affected does or does not qualify as illegal or harmful content. That would again require a concrete assessment of at least some actual items of user content.

It is not immediately obvious why the government has set so much store by the claimed systemic nature of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from individual content decisions it can avoid accusations of state censorship. If so, that ignores the fact that service providers, via their safety duties, are proxies for the regulator. The effect of the legislation on individual items of user content is no less concrete because service providers are required to make decisions under the supervision of Ofcom, rather than if Ofcom were wielding the blue pencil, the muffler or the content warning generator itself. 


Wednesday, 16 June 2021

Carved out or carved up? The draft Online Safety Bill and the press

When he announced the Online Harms White Paper in April 2019 the then Culture Secretary, Jeremy Wright QC, was at pains to reassure the press that the proposed regulatory regime would not impinge on press freedom. He wrote in a letter to the Society of Editors:

“where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

The last sentence, at any rate, always seemed like an impossible promise to fulfil. The government’s subsequent attempts to live up to it have resulted in some of the more inscrutable elements of the draft Online Safety Bill. 

Carve-out for news publisher content

It is true that ‘news publisher content’ is carved out of the safety duties that would be imposed on user to user and search services.  The exemption is intended to address the problem that a news publisher’s feed on, for instance, a social media site would constitute user generated content. As such, without an exemption it would be directly affected by the social media platform’s own duty of care and indirectly regulated by Ofcom.

However, a promise not to affect journalistic or editorial content goes further than that. First, the commitment is not limited to broadcasters or newspapers regulated by IPSO or IMPRESS.  Second, as we shall see, a regulatory framework may still have an indirect effect on content even if the content is carved out of the framework.

Furthermore, even trying to exclude direct effect gives rise to a problem. If you want to carve out the press, how do you do so without giving the government (or Ofcom) power to decide who does and does not qualify as the press? If a state organ draws that line, isn’t the resulting official list in itself an exercise in press regulation? We shall see how the draft Bill has tried to solve this conundrum.

Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those that facilitate it must take precautionary steps to mitigate the danger. That is the antithesis of the traditional principle that, within boundaries set by clear and precise laws, we are free to speak as we wish. The mainstream press may comfort themselves that this novel approach to speech is (for the moment) being applied only to the evil internet and to the unedited individual speech of social media users; but it is an unwelcome concept to see take root if you have spent centuries arguing that freedom of expression is not a fundamental risk, but a fundamental right.

Even the most voluble press advocates of imposing a duty of care on internet platforms have offered what seems a slightly muted welcome to these aspects of the draft Bill. Lord Black, in the House of Lords on 18 May 2021, (after declaring his interest as deputy chairman of the Telegraph Media Group) said:

“The draft Bill includes a robust and comprehensive exemption for news publishers from its framework of statutory regulation … . That is absolutely right. During pre-legislative scrutiny of the Bill, we must ensure that this exemption is both watertight and practical so that news publishers are not subject to any form of statutory control, and that there is no scope for the platforms to censor legitimate content.”

One might ask what constitutes ‘legitimate’ content and who - if not the platforms – would decide. Ofcom? At any rate the draft Bill will disappoint anyone hoping for a duty of care regime that could not have any effect at all on news publisher content. It is difficult to see how things could be otherwise, the former Culture Secretary’s promise notwithstanding.

The draft Bill

Now we can embark on a tour of the draft Bill’s attempts to square the circle of delivering on the former Secretary of State’s promise. First, a diagram.


Got that? Probably not.

So let us conduct a point by point examination of how the draft Bill tries to exclude the press from its regulatory ambit, and consider how far it succeeds. The News Media Association’s submission to the White Paper consultation, to which I will refer, contained a list of what the NMA thought the legislation should do in order to carve out the press. Unsurprisingly, the draft Bill falls short.

But first, a note on terminology: it is easy to slip into using ‘platforms’ to describe those organisations in scope. We immediately think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is not only about them: the government estimates that 24,000 companies and organisations will be in scope. That is everyone from the largest players to an MP’s discussion app, via Mumsnet and the local sports club discussion forum. So, in an effort not to lose sight of who is in scope, I shall adopt the dismally anodyne ‘U2U provider’.

Moderated comments sections

The first limb of the Secretary of State’s commitment was to avoid duplicating existing regulation of moderated comments sections on newspapers’ own websites. That has been achieved not by a press-specific exemption, but through the draft Bill’s general exclusion of low risk ‘limited functionality’ services. This provision exempts services in which users are able to communicate only in the following ways: posting comments or reviews relating to content produced or published by the provider of the service (or by a person acting on behalf of the provider), and in various specified related ways (such as ‘like’ or ‘dislike’ buttons).

This exemption as drafted has problems, since technically (even if not contractually) a user is able to post anything to a non-proactively moderated free text review section. That could include comments on comments – a degree of freedom which of itself appears to be disqualifying – even if the intended purpose is that the facility should be used only for reviewing the provider’s own content.

As for the protection that the exemption tangentially offers to comments sections on press websites, it is notable that it can be repealed or amended by secondary legislation, if the Secretary of State considers that to be appropriate because of the risk of physical or psychological harm to individuals in the UK presented by a service of the description in question.

News publisher content – what is it?

News publisher content present on a service is exempted from the service provider’s safety duties. There are two primary categories of news publisher content: that generated by UK-regulated broadcasters and that generated by other recognised news publishers. The latter have to meet a number of qualifying conditions, both administrative and substantive.

Administrative conditions

Administratively, a putative recognised news publisher must:

    (a)   Be an entity (i.e. an incorporated or unincorporated body or association of persons or an organisation)  

    (b) have a registered office or other business address in the UK

    (c) be the person with legal responsibility for material published by it in the UK

    (d) publish (by any means including broadcasting) the name, address, and registered number (if any) of the entity; and publish the name and address (and where relevant, registered or principal office and registered number) of any person who controls the entity (control meaning the same as in the Broadcasting Act).

Failure to meet any of these conditions would be fatal to an argument that the entity’s output qualified as news publisher content.

Organisations proscribed under the Terrorism Act 2000, or the purpose of which is to support a proscribed organisation, are expressly excluded from the news publisher exemption.

Substantive conditions

Substantively, the entity must:

    (a) Have as its principal purpose the publication of news-related material, such material being created by different persons and being subject to editorial control.

    (b) Publish such material in the course of a business (whether or not carried on with a view to profit)

    (c) Be subject to a standards code (one published either by an independent regulator or by the entity itself)

    (d) Have policies and procedures for handling and resolving complaints.

Again, failure to meet any of these conditions would be fatal.

‘News-related material’ has the same definition as in the Crime and Courts Act 2013:

    (a) News or information about current affairs

    (b) Opinion about matters relating to the news or current affairs; or

    (c) Gossip about celebrities, other public figures or other persons in the news.

News-related material is ‘subject to editorial control’ if there is a person (whether or not the publisher of the material) who has editorial or equivalent responsibility for the material, including responsibility for how it is presented and the decision to publish it.

Reposted news publisher material

The draft Bill also contains limited exemptions for news publisher content reposted by other users. To qualify, the material must be uploaded to or shared on the service by a user of the service, and:

    (a) Reproduce in full an article or written item originally published by a recognised news publisher (but not be a screenshot or photograph of that article or item or of part of it);

    (b) Be a recording of an item originally broadcast by a recognised news publisher (but not be an excerpt of such a recording); or

    (c) Be a link to a full article or written item originally published, or to a full recording of an item originally broadcast, by a recognised news publisher.

What isn’t exempted?

What news-related content would fall outside the exemptions from the U2U provider’s safety duties? Some of the most relevant are:

  • The user reposting exemption does not apply to quotations, snippets, excerpts, screenshots and the like.
  • Content from non-UK news publishers will not be exempt unless they are able to jump through the administrative and substantive hoops described above.  The requirement to have a registered office or other business address in the UK would itself seem likely to exclude the vast majority of non-UK news providers.
  • Individual journalist accounts. Many well known broadcast and news journalists have their own Twitter or other social media accounts and make use of them prolifically to report on current news. These are outside the primary exemption, since an individual journalist is not a recognised news publisher. (Some of what individual journalists do would, of course, fall within the re-posting exemption.) The NMA argued that the exemption must apply to “the news publishers, corporately and individually to all their workforce and contributors”.

One opaque aspect of the exemption is what is meant by content “generated” by a recognised news publisher. If a newspaper publishes a story incorporating an embedded link to a TikTok video (as the Daily Mail did recently with the video from a migrant boat crossing the Channel), is the link part of the content generated by the news publisher? If so, is it anomalous that the story – including the embedded video - on the news publisher’s own site, subsequently posted to (say) Twitter, is exempt from Twitter’s safety duty, yet the same video originally posted on TikTok is still within scope of TikTok’s safety duty?

The example of amateur video uploaded from a migrant boat brings us neatly to the topic of citizen journalism. Citizen journalism is within scope of U2U providers’ safety duties and, for ordinary U2U providers, enjoys no special status over and above any other user generated content. 

Large players (Category 1 providers) will have a variety of freedom of expression duties imposed on them, applicable to UK-linked news publisher content or journalistic content, as well as some duties in respect of so-called content of democratic importance. The duties will include, for instance, an obligation to specify in terms and conditions by what method journalistic content is to be identified. Since the draft Bill says only that journalistic content is content ‘generated for the purposes of journalism’, identifying such content looks like a tall order.

The journalistic content provisions are likely to run into criticism from opposing ends: on the one hand, that some users will rely on them as a smokescreen to protect what is in reality non-journalistic material; on the other, that the concept is too vague to be of real use, so that in practice the decision on how to categorise content is handed to Ofcom.

What is the significance of news publisher content being exempted?

The news publisher content exemption means that U2U providers do not have a safety duty for news publisher content. In other words, they are not obliged to include news publisher content in the various steps that they are required to take to fulfil their safety duties.

That does not mean that news publisher content could not be affected as a by-product of U2U providers' attempts to discharge their safety duties over other user content. U2U providers not being required proactively to monitor and inhibit news publisher content doesn’t mean that such content couldn’t be caught up in a provider’s efforts to do that for user generated content generally.

Lord Black spoke of precluding any scope for platforms to censor legitimate content. The closest the draft Bill’s general provisions come is the duty 'to have regard to the importance of freedom of expression’. For Category 1 providers the focus is additionally on dedicated, expedited complaints procedures and transparency of terms and conditions. 

The Impact Assessment concludes, under Freedom of Expression, that the regulatory model’s focus on transparency and user reporting and redress should lead to “some improvements” in users’ ability to appeal content removal and get this reinstated, “with a positive impact on freedom of expression”.

The Policy Risks table annexed to the Impact Assessment goes into more detail:

Risk: Regulation disproportionately impacts on freedom of expression, by incentivising or requiring content takedown.

Mitigation: The approach has built in appropriate safeguards to ensure protections for freedom of expression, including:

● Differentiated approach of legal/illegal content, e.g. not requiring takedown of legal but harmful content

● Safeguards for journalistic content

● Effective transparency reporting

● Proportionate enforcement sanctions to avoid incentivising takedowns

● User redress mechanisms will enable challenge to takedown

● Super-complaints will allow organisations to lodge complaints where they may be concerned about disproportionate impacts

● Regulator has a duty to consider freedom of expression

The Impact Assessment summarises the government’s final policy position thus:

“There will … be strong safeguards in place to ensure media freedom is upheld. Content and articles published by news media on their own sites will not be considered user generated content and thus will be out of regulatory scope.

Legislation will also include robust protections for journalistic content on in-scope services. Firstly, the legislation will provide a clear exemption for news publishers’ content. This means platforms will not have any new legal duties for these publishers’ content as a result of our legislation. Secondly, the legislation will oblige Category 1 companies to put in place safeguards for all journalistic content shared on their platforms. The safeguards will ensure that platforms consider the importance of journalism when undertaking content moderation, and can be held to account for the removal of journalistic content, including with respect to automated moderation tools.”

At the moment it is anyone’s guess what the various duties would mean when crystallised into practical requirements – a vice ingrained throughout the draft Bill. We will know only when Ofcom, however many years down the line, produces its series of safety Codes of Practice for the various different kinds of U2U service. A U2U provider would (unless it decides to take the brave route of claiming compliance with the safety duties in ways other than those set out in a Code of Practice) have to comply with whatever the applicable Code of Practice may say about freedom of expression.

If Ofcom were to go down the route of suggesting in a Code of Practice that news publisher content should be walled off from being indirectly affected by implementation of the providers’ safety duties, how could that be achieved? The spectre of an Ofcom-approved list of news publisher content providers rears its head again.

Even if there were such a list, how would such content be identified and separated out in practice? The NMA consultation submission suggested a system of ‘kite marking’. IT engineers could still be trying to build tagging systems to make that work in ten years’ time.

The government’s draft Online Safety Bill announcement claimed that the measures required of ordinary and large providers would “remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties.” (emphasis added)

This bold statement – in contrast with the more modest claim in the Impact Assessment - shows every sign of being another unfulfillable promise, whether for news publisher content or user-generated content generally.

Lord Black said in the Lords debate:

“We have the opportunity with this legislation to lead the world in ensuring proper regulation of news content on the internet, and to show how that can be reconciled with protecting free speech and freedom of expression. It is an opportunity we should seize.”

It can be no real surprise that a solution to squaring that circle is as elusive now as when the Secretary of State wrote to the Society of Editors two years ago. It has every prospect of remaining so.



Sunday, 5 May 2019

The Rule of Law and the Online Harms White Paper

Before the publication of the Online Harms White Paper on 8 April 2019 I proposed a Ten Point Rule of Law test to which it might usefully be subjected.

The idea of the test is less to evaluate the substantive merits of the government’s proposal – you can find an analysis of those here – than to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.

Here is an analysis of the White Paper from that perspective. The questions posed are whether the White Paper demonstrates sufficient certainty and precision in respect of each of the following matters.
1.    Which operators are and are not subject to the duty of care
The White Paper says that the regulatory framework should apply to “companies that allow users to share or discover user-generated content, or interact with each other online.”
This is undoubtedly broad, but on the face of it is reasonably clear.  The White Paper goes on to provide examples of the main types of relevant service:
-             Hosting, sharing and discovery of user-generated content (e.g. a post on a public forum or the sharing of a video).
-             Facilitation of public and private online interaction between service users (e.g. instant messaging or comments on posts).
However these examples introduce a significant element of uncertainty. Thus, how broad is ‘facilitation’? The White Paper gives a clue when it mentions ancillary services such as caching. Yet it is difficult to understand the opening definition as including caching.
The White Paper says that the scope will include “social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.”  In the Executive Summary it adds messaging services and search engines into the mix. Although the White Paper does not mention them, online games would clearly be in scope as would an app with social or discussion features.
Applicability to the press is an area of significant uncertainty. Comments sections on newspaper websites, or a separate discussion forum run by a newspaper such as in the Karim v Newsquest case would on the face of it be in scope. However, in a letter to the Society of Editors the Secretary of State has said:
“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”
This exclusion is nowhere stated in the White Paper. Further, it does not address the fact that newspapers are themselves users of social media. They have Facebook pages and Twitter accounts, with links to their own websites. As such, their own content is liable to be affected by a social media platform taking action to suppress user content in performance of its duty of care.
The verdict on this section might have been ‘extremely broad but clearly so’. However the uncertainty introduced by ‘facilitation’, and by the lack of clarity about newspapers, results in a FAIL.
2.      To whom the duty of care is owed
The answer to this appears to be ‘no-one’. That may seem odd, especially when Secretary of State Jeremy Wright referred in a recent letter to the Society of Editors to “a duty of care between companies and their users”, but what is described in the White Paper is not in fact a duty of care at all.
The proposed duty would not provide users with a basis on which to make a damages claim against the companies for breach, as is the case with a common law duty of care or a statutory duty of care under, say, the Occupiers’ Liability Act 1957.
Nor, sensibly, could the proposed duty do so since its conception of harm strays beyond established duty of care territory of risk of physical injury to individuals, into the highly contestable region of speech harms and then on into the unmappable wilderness of harm to society.
Thus in its introduction to the harms in scope the White Paper starts by referring to online content or activity that ‘harms individual users’, but then goes on: “or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities to foster integration.”
In the context of disinformation it refers to “undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
Whatever (if anything) these abstractions may mean, they are not the kind of thing that can properly be made the subject of a legal duty of care in the offline world sense of the phrase.
The proposed duty of care is something quite different: a statutory framework giving a regulator discretion to decide what should count as harmful, what kinds of behaviour by users should be regarded as causing harm, what rules should be put in place to counter it, and which operators to prioritise.
From a rule of law perspective the answer to the question posed is that it does seem clear that the duty would be owed to no one. In that limited sense it probably rates a PASS, but only by resisting the temptation to change that to FAIL for the misdescription of the scheme as creating a duty of care.
Nevertheless, the fact that the duty is of a kind that is owed to no-one paves the way for a multitude of FAILs for other questions.
3.      What kinds of effect on a recipient will and will not be regarded as harmful
This is an obvious FAIL. The White Paper has its origins in the Internet Safety Strategy Green Paper, yet does not restrict itself to what in the offline world would be regarded as safety issues.  It makes no attempt to define harm, apparently leaving it up to the proposed Ofweb to decide what should and should not be regarded as harmful. Some examples given in the White Paper suggest that effect on the recipient is not limited to psychological harms, or even distress.
This lack of precision is exacerbated by the fact that the kinds of harm contemplated by the White Paper are not restricted to those that have an identifiable effect on a recipient of the information, but appear to encompass nebulous notions of harm to society.
4.      What speech or conduct by a user will and will not be taken to cause such harm
The answer appears to be, potentially, “any”. The White Paper goes beyond defined unlawfulness into undefined harm, but places no limitation on the kind of behaviour that could in principle be regarded as causing harm. From a rule of law perspective of clarity this may be a PASS, but only in the sense that the kind of behaviour in scope is clearly unlimited.
5.      If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient
FAIL. There is no discussion of either of these points, beyond emphasising many times that children as well as adults should be regarded as potential recipients (although whether the duty of care should mean taking steps to exclude children, or to tailor all content to be suitable for children, or a choice of either, or something else, is unclear). The White Paper makes specific reference to children and vulnerable users, but does not limit itself to those.
6.      Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform
FAIL. The White Paper mentions, specifically in the context of disinformation, the much discussed amplification, filter bubble and echo chamber effects that are associated with social media. More broadly it refers to ‘safety by design’ principles, but does not identify any design features that are said to give rise to a particular risk of harm.
The safety by design principles appear to be not about identifying and excluding features that could be said to give rise to a risk of harm, but more focused on designing in features that the regulator would be likely to require of an operator in order to satisfy its duty of care.
Examples given include clarity to users about what forms of content are acceptable, effective systems for detecting and responding to illegal or harmful content, including the use of AI-based technology and trained moderators; making it easy for users to report problem content, and an efficient triage system to deal with reports.
7.      What circumstances would trigger an operator's duty to take preventive or mitigating steps
FAIL. The specification of such circumstances would be left up to the discretion of Ofweb, in its envisaged Codes of Practice or, in the case of terrorism or child sexual exploitation and abuse, the discretion of the Home Secretary via approval of Ofweb’s Codes of Practice.
The only concession made in this direction is that the government is consulting on whether Codes of Practice should be approved by Parliament. However it is difficult to conclude that laying the detailed results of a regulator’s ad hoc consideration before Parliament for approval, almost certainly on a take it or leave it basis, has anything like the same democratic or constitutional force as requiring Parliament to specify the harms and the nature of the duty of care with adequate precision in the first place.
8.      What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm)
The White Paper says that legislation will make clear that companies must do what is reasonably practicable. However that is not enough to prevent a FAIL, for the same reasons as 7. Moreover, it is implicit in the White Paper section on Fulfilling the Duty of Care that the government has its own views on the kinds of steps that operators should be taking to fulfil the duty of care in various areas. This falls uneasily between a statutorily defined duty, the role of an independent regulator in deciding what is required, and the possible desire of government to influence an independent regulator.
9.      How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question
FAIL. The White Paper does not discuss this, beyond the general discussion of freedom of expression in the next question.
10.   Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage), would negate the duty of care
The question of collateral damage is not addressed, other than implicitly in the various statements that the government’s vision includes freedom of expression online and that the regulatory framework will “set clear standards to help companies ensure safety of users while protecting freedom of expression”.
Further, “the regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression.” It will “ensure that the new regulatory requirements do not lead to a disproportionately risk averse response from companies that unduly limits freedom of expression, including by limiting participation in public debate.”
Thus consideration of the consequence of a risk of collateral damage to lawful speech is left up to the decision of a regulator, rather than to the law or a court. The regulator will presumably, by the nature of the proposal, be able to give less weight to the risk of suppressing lawful speech that it considers to be harmful. FAIL.
Postscript: It may be said against much of this analysis that precedents exist for appointing a discretionary regulator with power to decide what does and does not constitute harmful speech.
Thus, for broadcast, the Communications Act 2003 does not define “offensive or harmful” and Ofcom is largely left to decide what those mean, in the light of generally accepted standards.
Whatever the view of the appropriateness of such a regime for broadcast, the White Paper proposals would regulate individual speech. Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in 1997. The US Supreme Court found that:
“This dynamic, multi-faceted category of communication includes not only traditional print and news services, but also audio, video and still images, as well as interactive, real-time dialogue. Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, ‘the content on the internet is as diverse as human thought’ ... We agree with its conclusion that our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”
In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.

Thursday, 18 April 2019

Users Behaving Badly – the Online Harms White Paper

Last Monday, having spent the best part of a day reading the UK government's Online Harms White Paper, I concluded that if the road to hell was paved with good intentions, this was a motorway.

Nearly two weeks on, after full and further consideration, I have found nothing to alter that view. This is why.

The White Paper

First, a reminder of what the White Paper proposes. The government intends to legislate for a statutory ‘duty of care’ on social media platforms and a wide range of other internet companies that "allow users to share or discover user-generated content, or interact with each other online". This could range from public discussion forums to sites carrying user reviews, to search engines, messaging providers, file sharing sites, cloud hosting providers and many others. 

The duty of care would require them to “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”. This would apply not only to illegal content and activities, but also to lawful material regarded as harmful.

The duty of care would be overseen and enforced by a regulator armed with power to fine companies for non-compliance. That might be an existing or a new body (call it Ofweb).


Ofweb would set out rules in Codes of Practice that the intermediary companies should follow to comply with their duty of care. For terrorism and child sexual abuse material the Home Secretary would have direct control over the relevant Codes of Practice.

Users would get a guaranteed complaints mechanism to the intermediary companies. The government is consulting on the possibility of appointing designated organisations who would be able to make ‘super-complaints’ to the regulator.

Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. Ofweb would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online.

Which lawful content would count as harmful is not defined. The White Paper provides an ‘initial’ list of content and behaviour that would be in scope: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM).

This is not a list that could readily be transposed into legislation, even if that were the government’s intention. Some of the topics – FGM, for instance – are more specific than others. But most are almost as unclear as ‘harmful’ itself. For instance the White Paper gives no indication as to what would amount to trolling. It says only that ‘cyberbullying, including trolling, is unacceptable’. It could as well have said ‘behaving badly is unacceptable’.

In any event the White Paper leaves the strong impression that the legislation would eschew even that level of specificity and build the regulatory structure simply on the concept of ‘harmful’.

The White Paper does not say in terms how the ‘initial’ list of content and behaviour in scope would be extended. It seems that the regulator would decide:

“This list is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.” [2.2]
In that event Ofweb would effectively have the power to decide what should and should not be regarded as harmful.

The White Paper proposes some exclusions: harms suffered by companies as opposed to individuals, data protection breaches, harms suffered by individuals resulting directly from a breach of cyber security or hacking, and all harms suffered by individuals on the dark web rather than the open internet.


Here is a visualisation of the White Paper proposals, alongside comparable offline duties of care. 

  
Good intentions

The White Paper is suffused with good intentions. It sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material.

However, flying a virtuous banner is no guarantee that the army is marching in the right direction. Nor does it preclude the possibility that specialised units would be more effective.

The government presents this all-encompassing approach as a virtue, contrasted with:
“a range of UK regulations aimed at specific online harms or services in scope of the White Paper, but [which] creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face” [2.5].
An aversion to fragmentation is like saying that, instead of the framework of criminal offences and civil liability focused on specific kinds of conduct that makes up our mosaic of offline laws, we should have a single offence of Behaving Badly.

We could not contemplate such a universal offence with equanimity. A Law against Behaving Badly would be so open to subjective and arbitrary interpretation as to be the opposite of law: rule by ad hoc command. Assuredly it would fail to satisfy the rule of law requirement of reasonable certainty. By the same token we should treat with suspicion anything that smacks of a universal Law against Behaving Badly Online.

In placing an undefined and unbounded notion of harm at the centre of its proposals for a universal duty of care, the government has set off down that path.

Three degrees of undefined harm

Harm is an amorphous concept. It changes shape according to the opinion of whoever is empowered to apply it: in the government’s proposal, Ofweb.

Even when limited to harm suffered by an individual, harm is an ambiguous term. It will certainly include objectively ascertainable physical injury – the kind of harm to which comparable offline duties of care are addressed.

But it may also include subjective harms, dependent on someone’s own opinion that they have suffered what they regard as harm. When applied to speech, this is highly problematic. One person may enjoy reading a piece of searing prose. Another may be distressed. How is harm, or the risk of harm, to be determined when different people react in different ways to what they are reading or hearing? Is distress enough to render something harmful? What about mild upset, or moderate annoyance? Does offensiveness inflict harm? At its most fundamental, is speech violence? 

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy:

“We are clear that the regulator will not be responsible for policing truth and accuracy online.” [36] 
“Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not.” [7.31]
The White Paper acknowledges that:
“There will be difficult judgement calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online” [7.31]
The contradiction is not something that can be cured by getting some interested parties around a table. It is the cleft stick into which a proposal of this kind inevitably wedges itself, and from which there is no escape.

A third variety of harm, yet more nebulous, can be put under the heading of ‘harm to society’. This kind of harm does not depend on identifying an individual who might be directly harmed. It tends towards pure abstraction, malleable at the will of the interpreting authority.

Harms to society feature heavily in the White Paper; for example, content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
Similarly:
“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.

Democratic deficit

One particular concern is the potential for a duty of care supervised by a regulator and based on a malleable notion of harm to be used as a mechanism to give effect to some Ministerial policy of the day, without the need to obtain legislation.

Thus, two weeks before the release of the White Paper Health Secretary Matt Hancock suggested that anti-vaxxers could be targeted via the forthcoming duty of care.

The White Paper duly recorded, under “Threats to our way of life”, that “Inaccurate information, regardless of intent, can be harmful – for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health.” [1.23]

If a Secretary of State decides that he wants to silence anti-vaxxers, the right way to go about it is to present a Bill to Parliament, have it debated and, if Parliament agrees, pass it into law. The structure envisaged by the White Paper would create a channel whereby an ad hoc Ministerial policy to silence a particular group or kind of speech could be framed as combating an online harm, pushed to the regulator then implemented by its online intermediary proxies. Such a scheme has democratic deficit hard baked into it.

Perhaps in recognition of this, the government is consulting on whether Parliament should play a role in developing or approving Ofweb’s Codes of Practice. That, however, smacks more of sticking plaster than cure.

Impermissible vagueness

Building a regulatory structure on a non-specific notion of harm is not a matter of mere ambiguity, where some word in an otherwise unimpeachable statute might mean one thing or another and the court has to decide which it is. It strays beyond ambiguity into vagueness and gives rise to rule of law issues.

The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:

"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."
Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."
Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

If the duty of care is based on an impermissibly vague concept such as ‘harm’, then the legislation has a rule of law problem. It is not necessarily cured by empowering the regulator to clothe the skeleton with codes of practice and interpretations, for three reasons: 

First, impermissibly vague legislation does not provide a skeleton at all – more of a canvas on to which the regulator can paint at will; 

Second, if it is objectionable for the legislature to delegate basic policy matters to policemen, judges and juries it is unclear why it is any less objectionable to do so to a regulator; 

Third, regulator-made law is a moveable feast.

All power to the sheriff

From a rule of law perspective undefined harm ought not to take centre stage in legislation.

However if the very idea is to maximise the power and discretion of a regulator, then inherent vagueness in the legislation serves the purpose very well. The vaguer the remit, the more power is handed to the regulator to devise policy and make law.

John Humphrys, perhaps unwittingly, put his finger on it during the Today programme on 8 April 2019 (4:00 onwards). Joy Hyvarinen of Index on Censorship pointed out how broadly Ofcom had interpreted harm in its 2018 survey, to which John Humphrys retorted: “You deal with that by defining [harm] more specifically, surely”.

That would indeed be an improvement. But what interest would a government intent on creating a powerful regulator, not restricted to a static list of in-scope content and behaviour, have in cramping the regulator’s style with strict rules and carefully limited definitions of harm? In this scheme of things breadth and vagueness are not faults but a means to an end.

There is a precedent for this kind of approach in broadcast regulation. The Communications Act 2003 refers to 'offensive and harmful', makes no attempt to define them and leaves it to Ofcom to decide what they mean. Ofcom is charged with achieving the objective: 
“that generally accepted standards are applied to the contents of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.
William Perrin and Professor Lorna Woods, whose work on duties of care has influenced the White Paper, say of the 2003 Act that: 
"competent regulators have had little difficulty in working out what harm means" [37]. 
They endorse Baroness Grender’s contribution to a House of Lords debate in November 2018, in which she asked: 
"Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?"
The answer is that in 2003 the legislators did not have to understand what the vague term 'harm' meant because they gave Ofcom the power to decide. It is no surprise if Ofcom has had little difficulty, since it is in reality not 'working out what harm means' but deciding on its own meanings. It is, in effect, performing a delegated legislative function.

Ofweb would be in the same position, effectively exercising a delegated power to decide what is and is not harmful.

Broadcast regulation is an exception from the norm that speech is governed only by the general law. Because of its origins in spectrum scarcity and the perceived power of the medium, it has been considered acceptable to impose stricter content rules and a discretionary style of regulation on broadcast, in addition to the general laws (defamation, obscenity and so on) that apply to all speech.

That does not, however, mean that a similar approach is appropriate for individual speech. Vagueness goes hand in hand with arbitrary exercise of power. If this government had set out to build a scaffold from which to hang individual online speech, it could hardly have done better.

The duty of care that isn’t

Lastly, it is notable that as far as can be discerned from the White Paper the proposed duty of care is not really a duty of care at all.

A duty of care properly so called is a legal duty owed to identifiable persons, who can claim damages if they suffer injury caused by a breach of the duty. Common law negligence and liability under the Occupiers’ Liability Act 1957 are examples. Such duties are typically limited to personal injury and damage to physical property, and only rarely require, say, an occupier to prevent visitors from injuring each other. An occupier owes no duty in respect of what visitors to the property say to each other.

The absence in the White Paper of any nexus between the duty of care and individual persons would allow Ofweb’s remit to be extended beyond injury to individuals and into the nebulous realm of harms to society. That, as discussed above, is what the White Paper proposes.

Occasionally a statute creates something that it calls a duty of care, but which in reality describes a duty owed to no-one in particular, breach of which is (for instance) a criminal offence.

An example is s.34 of the Environmental Protection Act 1990, which creates a statutory duty in respect of waste disposal. As would be expected of such a statute, s.34 is precise about the conduct that is in scope of the duty. In contrast, the White Paper proposes what is in effect a universal online ‘Behaving Badly’ law.

Even though the Secretary of State referred in a recent letter to the Society of Editors to “A duty of care between companies and their users”, the ‘duty of care’ described in the White Paper is something quite different from a duty of care properly so called.

The White Paper’s duty of care is a label applied to a regulatory framework that would give Ofweb discretion to decide what user communications and activities on the internet should be deemed harmful, and the power to enlist proxies such as social media companies to sniff and snuff them out, and to take action against an in scope company if it does not comply.

This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.


[Added visualisation 24 May 2019. Amended 19 July 2019 to make clear that the regulator might be an existing or new body - see Consultation Q.10. Grammatical error corrected 1 March 2023.]