Saturday 30 July 2022

Platforms adjudging illegality – the Online Safety Bill’s inference engine

The Online Safety Bill, before the pause button was pressed, enjoyed a single day’s Commons Report stage debate on 12 July 2022.  Several government amendments were passed and incorporated into the Bill.

One of the most interesting additions is New Clause 14 (NC14), which stipulates how user-to-user providers and search engines should decide whether user content constitutes a criminal offence. This was previously an under-addressed but nevertheless deep-seated problem for the Bill’s illegality duty. 

One underlying issue is that (especially for real-time proactive filtering) providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making.

Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff.

NC14 looks like a response to concerns raised in April 2022 by the Independent Reviewer of Terrorism Legislation over how a service provider’s illegality duty would apply to terrorism offences, for which (typically) the scope of acts constituting an offence is extremely broad. The most significant limits on the offences are set by intention and available defences – neither of which may be apparent to the service provider. As the Independent Reviewer put it:

“Intention, and the absence of any defence, lie at the heart of terrorism offending”.

He gave five examples of unexceptional online behaviour, ranging from uploading a photo of Buckingham Palace to soliciting funds on the internet, which, if intention or lack of a defence were simply assumed, would be caught by the illegality duty. He noted:

“It cannot be the case that where content is published etc. which might result in a terrorist offence being committed, it should be assumed that the mental element is present, and that no defence is available. Otherwise, much lawful content would “amount to” a terrorist offence.”

If, he suggested, the intention of the Bill was that inferences about mental element and lack of defence should be drawn, then the Bill ought to identify a threshold. But if the intention was to set the bar at ‘realistic to infer’, that:

“does not allow sufficiently for freedom of speech. It may be “realistic” but wholly inaccurate to infer terrorist intent in the following words: “I encourage my people to shoot the invaders””

Issues of this kind are not confined to terrorism offences. There will be other offences for which context is significant, or where a significant component of the task of keeping the offence within proper bounds is performed by intention and defences.

Somewhat paraphrased, the answers provided to service providers by NC14 are as follows (a rough sketch of the resulting decision rule appears after the list):

  • Base your illegality judgement on whatever relevant information you or your automated system have reasonably available.
  • If you have reasonable grounds to infer that all elements of the offence (including intention) are present, that is sufficient unless you have reasonable grounds to infer that a defence may be successful.
  • If an item of content surmounts the reasonable grounds threshold, and you do not have reasonable grounds to infer a defence, then you must treat it as illegal.
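
Reduced to its bare logic – and leaving aside everything that is packed into the phrase ‘reasonable grounds to infer’ – the rule might be sketched like this. This is an illustrative Python sketch only; none of the names or inputs come from the Bill itself, and the evaluative judgements are crudely reduced to yes/no inputs:

    from dataclasses import dataclass

    @dataclass
    class IllegalityAssessment:
        # Judgements reached on the information reasonably available to the
        # provider (or its automated system) -- nothing more.
        grounds_to_infer_all_elements: bool  # includes mental elements such as intention
        grounds_to_infer_defence: bool       # some basis to infer a defence may succeed

    def must_treat_as_illegal(assessment: IllegalityAssessment) -> bool:
        # Paraphrase of the NC14 decision rule, not the statutory text.
        if not assessment.grounds_to_infer_all_elements:
            return False
        # The possibility of a defence is ignored unless there are reasonable
        # grounds, on the available information, to infer it may succeed.
        return not assessment.grounds_to_infer_defence

    # Illustrative use: all elements inferred, no basis to infer a defence.
    print(must_treat_as_illegal(IllegalityAssessment(True, False)))  # True -> treat as illegal
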

Factors relevant to the reasonable availability of information include the size and capacity of the service provider, and whether a judgement is made by human moderators, automated systems or processes, or a combination of both. (The illegality duty applies not just to large social media operators but to all 25,000 providers within the scope of the Bill.)

Ofcom will be required to provide guidance to service providers about making illegality judgements.

What does this mean for users? It is users, let us not forget, whose freedom of expression rights are at risk of being interfered with as a result of the illegality removal duty imposed on service providers. The duty can be characterised as a form of prior restraint.

The first significant point concerns the unavailability to the provider of otherwise potentially relevant contextual information. If, from information reasonably available to the provider (which at least for automated systems may be only the content of the posts themselves and, perhaps, related posts), it appears that there are reasonable grounds to infer that an offence has been committed, that is enough. At least for automated real-time systems, the possibility that extrinsic information might put the post in a different light appears to be excluded from consideration, unless its existence and content can be inferred from the posts themselves.

Alternatively, for offences that are highly dependent on context (including extrinsic context), would there be a point at which a provider could (or should) conclude that there is too little information available to support a determination of reasonable grounds to infer?

Second, the difference between elements of the offence itself and an available defence may be significant. The possibility of a defence is to be ignored unless the provider has some basis in the information reasonably available to it on which to infer that a defence may be successful.

Take ‘reasonable excuse’. For the new harmful communications offence that the Bill would enact, lack of reasonable excuse is an element of the offence, not a defence. A provider could not conclude that the user’s post was illegal unless it had reasonable grounds to infer (on the basis of the information reasonably available to it) that there was no reasonable excuse.

By contrast, two offences under the Terrorism Act 2000, S.58 (collection of information likely to be of use to a terrorist) and S.58A (publishing information about members of the armed forces etc. likely to be of use to a terrorist), provide for a reasonable excuse defence.

The possibility of such a defence is to be ignored unless the provider has reasonable grounds (on the basis of the information reasonably available to it) to infer that a defence may be successful.

The difference is potentially significant when we consider that (for instance) journalism or academic research constitutes a defence of reasonable excuse under S.58. Unless the material reasonably available to the provider (or its automated system) provides a basis on which to infer that journalism or academic research is the purpose of the act, the possibility of a journalism or academic research defence is to be ignored. (If, hypothetically, the offence had been drafted similarly to the harmful communications offence, so that lack of reasonable excuse was an element of the offence, then in order to adjudge the post as illegal the provider would have had to have reasonable grounds to infer that the purpose was not (for instance) journalism or academic research.)
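
The asymmetry can be made concrete with another rough, illustrative sketch. Again, the function names and boolean inputs are mine, not the Bill’s, and ‘reasonable grounds to infer’ is crudely reduced to a yes/no input:

    def illegal_where_excuse_is_an_element(infer_act: bool,
                                           infer_no_reasonable_excuse: bool) -> bool:
        # Harmful communications-style drafting: lack of reasonable excuse is an
        # element of the offence, so the provider needs reasonable grounds to
        # infer its absence before adjudging the post illegal.
        return infer_act and infer_no_reasonable_excuse

    def illegal_where_excuse_is_a_defence(infer_act: bool,
                                          infer_reasonable_excuse: bool) -> bool:
        # S.58-style drafting: reasonable excuse is a defence, so its possibility
        # is ignored unless there are grounds to infer it may succeed.
        return infer_act and not infer_reasonable_excuse

    # A journalist's post that carries no visible signal of journalism: the
    # available information supports no inference either way about excuse.
    print(illegal_where_excuse_is_an_element(True, False))  # False - cannot infer absence of excuse
    print(illegal_where_excuse_is_a_defence(True, False))   # True - possibility of defence ignored

On identical facts the two drafting approaches produce opposite results, purely because of where the reasonable excuse sits in the structure of the offence.
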

NC14 was debated at Report Stage. The Opposition spokesperson, Alex Davies-Jones, said:

“The new clause is deeply problematic, and is likely to reduce significantly the amount of illegal content and fraudulent advertising that is correctly identified and acted on.”

“First, companies will be expected to determine whether content is illegal or fraudulent based on information that is “reasonably available to a provider”, with reasonableness determined in part by the size and capacity of the provider. That entrenches the problems I have outlined with smaller, high-risk companies being subject to fewer duties despite the acute risks they pose. Having less onerous applications of the illegal safety duties will encourage malign actors to migrate illegal activity on to smaller sites that have less pronounced regulatory expectations placed on them.”

If information “reasonably available to a provider” is insufficiently stringent, what information should a provider be required to base its decision upon? Should it guess at information that it does not have, or make assumptions (which would trespass into arbitrariness)?

In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned.

Alex Davies-Jones went on:

“That significantly raises the threshold at which companies are likely to determine that content is illegal.”

We might fairly ask: raises the threshold compared with what? The draft Bill defined illegal user content as “content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence.” That standard (which would inevitably have resulted in over-removal) was dropped from the Bill as introduced into Parliament, leaving it unclear what standard service providers were to apply.

The new Online Safety Minister (Damian Collins) said:

“The concern is that some social media companies, and some users of services, may have sought to interpret the criminal threshold as being based on whether a court of law has found that an offence has been committed, and only then might they act. Actually, we want them to pre-empt that, based on a clear understanding of where the legal threshold is. That is how the regulatory codes work. So it is an attempt, not to weaken the provision but to bring clarity to the companies and the regulator over the application.”

In any event, if the Opposition were under the impression that prior to NC14 the threshold in the Bill was lower than ‘reasonable grounds to infer’, what might that standard be? If service providers were obliged to remove user content on (say) a mere suspicion of possible illegality, does that sufficiently protect legal online speech? Would a standard set that low comply with the UK’s ECHR obligations, to which – whatever this government’s view of the ECHR may be – the Opposition is committed? Indeed it is sometimes said that the standard set by the ECHR is manifest illegality.

It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.

It is ironic that in the 12 July Commons debate the most perceptive comments about how service providers are meant to comply with the illegality duty were made by Sir Jeremy Wright QC, the former Culture Secretary who launched the White Paper in April 2019. He said:

“When people first look at this Bill, they will assume that everyone knows what illegal content is and therefore it should be easy to identify and take it down, or take the appropriate action to avoid its promotion. But, as new clause 14 makes clear, what the platform has to do is not just identify content but have reasonable grounds to infer that all elements of an offence, including the mental elements, are present or satisfied, and, indeed, that the platform does not have reasonable grounds to infer that the defence to the offence may be successfully relied upon.

That is right, of course, because criminal offences very often are not committed just by the fact of a piece of content; they may also require an intent, or a particular mental state, and they may require that the individual accused of that offence does not have a proper defence to it.

The question of course is how on earth a platform is supposed to know either of those two things in each case. This is helpful guidance, but the Government will have to think carefully about what further guidance they will need to give—or Ofcom will need to give—in order to help a platform to make those very difficult judgments.”

Why did the government not address this fundamental issue at the start, when a full and proper debate about it could have been had?

This is not the only aspect of the Online Safety Bill that could and should have been fully considered and discussed at the outset. If the Bill ends up being significantly delayed, or even taken back to the drawing board, the government has only itself to blame.


