Tuesday 24 January 2023

Positive light or fog in the Channel?

If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications. Proactive measures could involve scouring the platform for content already uploaded, filtering and blocking at the point of attempted upload, or both.

The political target of the Bill amendment, which the government says it will introduce in the House of Lords, is videos of migrants crossing the Channel in boats. The Secretary of State explained it thus:
“We will also add Section 24 of the Immigration Act 1971 to the priority offences list in Schedule 7. Although the offences in Section 24 cannot be carried out online, paragraph 33 of the Schedule states that priority illegal content includes the inchoate offences relating to the offences that are listed. Therefore aiding, abetting, counselling, conspiring etc those offences by posting videos of people crossing the channel which show that activity in a positive light could be an offence that is committed online and therefore falls within what is priority illegal content. The result of this amendment would therefore be that platforms would have to proactively remove that content.”
We have to assume that this wheeze was dreamed up in some haste, meeting the immediate political imperative to respond to a strongly supported backbench amendment that tried to tack videos of boat crossings on to the Bill’s children’s duties. Now that the dust has settled, at least temporarily, let us take a look at what would be involved in applying the government's proposal.

In view of some of the media commentary, it is worth emphasising that the proposed amendment to the Bill would not create a new offence. It is based on existing accessory liability legislation, which platforms (and indeed search engines) would have to apply proactively.

In a positive light

Where does ‘in a positive light’ come from? Presumably the Secretary of State had in mind that if a video shows the activity of crossing the Channel to gain illegal entry to the UK in a negative light – thus tending to deter the activity – it cannot amount to counselling (in modern language, encouraging) an offence of entering (or attempting to enter) the UK illegally. So far so good. But that does not mean we should jump to the conclusion that ‘in a positive light’ is sufficient to amount to encouragement.

The offence of aiding, abetting, counselling etc a Section 24 offence applies not only to videos but to any kind of communication, whether on social media, simple discussion forums, websites or elsewhere.

You do not have to go far to find studies suggesting that illegal immigration can have positive benefits to an economy. Does supporting that position in an online discussion about UK immigration put the activity of illegal entry to the UK in a positive light? Quite possibly. Does it (in the legal sense) encourage an offence of illegal entry to the UK? Surely not. That is a far cry from intentionally encouraging a prospective illegal migrant to commit an illegal entry offence.

The idea that someone might be prosecuted for voicing that kind of opinion in a general online discussion is (one would hope) absurd. It brings to mind the comment of Lord Scott in Rusbridger v Attorney-General, a case about the moribund Section 3 of the Treason Felony Act 1848:
“[Y]ou do not have to be a very good lawyer to know that to advocate the abolition of the monarchy and its replacement by a republic by peaceful and constitutional means will lead neither to prosecution nor to conviction. All you need to be is a lawyer with commonsense.”
In any event legislation must, so far as it is possible to do so, be read and given effect in a way which is compatible with the right of freedom of expression under the European Convention on Human Rights (S.3 Human Rights Act 1998; albeit the Bill of Rights Bill would repeal that provision).

The Secretary of State’s proposal has reportedly sparked fears among humanitarian organisations of consequences if they share footage that may call into question the policing of Channel crossings. The Home Office, for its part, has said that they would not be penalised. That is an understandable view if all the legal elements of an encouragement offence are properly taken into account.

Nevertheless, it is not so far-fetched a notion that an online platform, tasked by the Online Safety Bill proactively to detect and remove user content that encourages an illegal entry offence, might consider itself duty-bound to remove content that in actual fact would not result in prosecution or a conviction in court. There are specific reasons for this under the Bill, which contrast with prosecution through the courts.

Prosecution versus the Bill's illegality duties

First, the platform’s removal duty under the Bill kicks in not if the user’s content is illegal beyond reasonable doubt, or manifestly illegal, but if the platform has ‘reasonable grounds to infer’ illegality – on the face of it a significantly lower standard. Whether this standard is compatible with Article 10 of the European Convention on Human Rights is questionable, but nevertheless it is what the Bill says. The Bill would inevitably require platforms to remove some content that is in fact legal.

Second, the Bill requires the platform to make its judgement on the basis of all relevant information reasonably available to it: a far more limited factual basis than a court would have. At least for an automated system that would be likely to be the content of the post and any related information on the platform (such as information indicating the nature and identity of the poster). It excludes any extrinsic contextual information not reasonably available to the platform.

Further, the platform can take into account the possibility of a defence only if it has reasonable grounds to infer that one may successfully be relied upon. For many defences (such as reasonable excuse) any grounds for a defence will not necessarily be apparent from the information available to the platform, in which case the possibility of a defence must be ignored. 

The platform’s assessment of illegality may thus depend on the happenstance of whether there is anything in the post itself, or its surrounding data, that points to the possibility of a successful defence. For some widely drawn offences intent and available defences are the most significant elements in determining legality, and are integral to the balance drawn by the legislature. This, we shall see, is of particular relevance to the encouragement and assistance offences under the Serious Crime Act 2007.

Third, the task of a platform is not to second-guess whether the authorities would prosecute, but to decide whether it has reasonable grounds to infer that the content falls within the letter of the law. Whilst the Bill makes numerous references to proportionality, that does not affect the basis on which the platform must determine illegality. That is a binary, yes or no assessment. There is no obvious room for a platform to conclude that something is only a little bit illegal, or to decide that, once detected, some content crossing the ‘reasonable grounds to infer’ threshold could be left up. Certainly the political expectation is that any detected illegal content will be removed.
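At the risk of over-literalism, the shape of that determination can be caricatured in a few lines of entirely hypothetical Python. The names, and the reduction of every element to a yes/no flag, are mine rather than the Bill’s; this is a sketch of the decision structure described in the first three points above, not of anything a real moderation system does.

from dataclasses import dataclass

@dataclass
class Assessment:
    grounds_to_infer_conduct: bool   # conduct element apparent from the available information
    grounds_to_infer_intent: bool    # mental element apparent from the available information
    grounds_to_infer_defence: bool   # grounds for a possibly successful defence apparent

def must_treat_as_illegal(a: Assessment) -> bool:
    """Binary outcome: nothing is 'a little bit illegal', and there is no prosecutorial discretion."""
    elements_made_out = a.grounds_to_infer_conduct and a.grounds_to_infer_intent
    # The possibility of a defence is ignored unless grounds for it are apparent.
    return elements_made_out and not a.grounds_to_infer_defence

Even in that caricature the asymmetry is visible: the elements of the offence need only be inferable from whatever information happens to be available, while a defence counts only if something in that information positively points to it.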

If that is right, the assessment that platforms are required to make under the Bill lacks anything akin to the ameliorating effect of prosecutorial discretion on the rough edges of the criminal law. Conversely, to build such discretion, even principles-based, into the decision-making required of platforms would hardly be a solution either, especially not at the scale and speed implied by automated proactive detection and removal obligations. We do not want platforms to be arbiters of truth, but to ask them (or their automated systems) to be judges of the public interest or of the seriousness of offending would be a recipe for guesswork and arbitrariness, even under the guidance of Ofcom.

If this seems like a double bind, it is. It reflects a fundamental flaw in the Bill’s duty of care approach: the criminal law was designed to be operated within the context of the procedural protections provided by the legal system, and to be adjudged by courts on established facts after due deliberation; not to be the subject of summary justice dispensed on the basis of incomplete information by platforms and their automated systems tasked with undertaking proactive detection.

Fourth, we shall see that in some cases the task required of the platform appears to involve projection into the future on hypothetical facts. Courts are loath to assess future criminal illegality on a hypothetical basis. Their task at trial is to determine whether the events that are proved in fact to have occurred amounted to an offence.

Fifth, inaccuracy. False positives are inevitable with any moderation system - all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives.

Sixth, the involvement of Ofcom. The platform would have the assistance of a Code of Practice issued by Ofcom. That would no doubt include a section describing the law on encouragement and assistance in the context of the S.24 1971 Act illegal entry offences, and would attempt to draw some lines to guide the platform’s decisions about whether it had reasonable grounds to infer illegality.

An Ofcom Code of Practice would carry substantial legal and practical weight. That is because the Bill provides that taking the measures recommended in a Code of Practice is deemed to fulfil the platform’s duties under the Bill. Much would therefore rest on Ofcom’s view of the law of encouragement and assistance and what would constitute reasonable grounds to draw an inference of illegality in various factual scenarios.

Seventh, the involvement of the Secretary of State. Ofcom might consider whether to adopt the Secretary of State’s ‘in a positive light’ interpretation. As the Bill currently stands, if the Secretary of State did not approve of Ofcom’s recommendation for public policy reasons s/he could send the draft Code of Practice back to Ofcom with a direction to modify it – and, it seems, keep on doing so until s/he was happy with its contents.

Even if that controversial power of direction were removed from the Bill, Ofcom would still have significant day-to-day power to adopt interpretations of the law and apply them to platforms’ decision-making (albeit Ofcom’s interpretations would in principle be open to challenge by judicial review).

As against those seven points, in fulfilling its duties under the Bill a platform is required to have particular regard to the importance of protecting users’ right to freedom of expression within the law. ‘Within the law’ might suggest that the duty has minimal relevance to the illegality duties, especially when clause 170 sets out expressly how platforms are to determine illegality. It provides that if the reasonable grounds to infer test is satisfied, the platform must treat the content as illegal.

The government’s ECHR Memorandum suggests that the ‘have particular regard’ duty may have some effect on illegality determination, but it does not explain how it does so in the face of the express provisions of clause 170. It also inaccurately paraphrases clause 18 by omitting ‘within the law’:
“34. Under clause 18, all in-scope service providers are required to have regard to the importance of protecting freedom of expression when deciding on and implementing their safety policies and procedures. This will include assessments as to whether content is illegal or of a certain type and how to fulfil its duties in relation to such content. Clause 170 makes clear that providers are not required to treat content as illegal content (i.e. to remove it from their service) unless they have reasonable grounds to infer that all elements of a relevant offence are made out. They must make that inference on the basis of all relevant information reasonably available to them.”
That is all by way of lengthy preliminary. Now let us delve into how a platform might be required to go about assessing the legality of a Channel dinghy video, first under the Accessories and Abettors Act 1861, then under the companion encouragement and assistance offences of the Serious Crime Act 2007.

Let us assume that the Secretary of State is right: that posting a video of people crossing the Channel in dinghies, which shows that activity in a positive light, can in principle amount to encouraging an illegal entry offence. In the interests of simplicity, I will ignore the Secretary of State’s reference to conspiracy. How should a platform go about determining illegality?

Spoiler alert: the process is more complicated and difficult than the Secretary of State’s pronouncement might suggest. And in case anyone is inclined to charge me with excessive legal pedantry, let us not forget that the task that the Bill expressly requires a platform to undertake is to apply the rules laid down in the Bill and in the relevant underlying offences. The task is not to take a rough and ready ‘that looks a bit dodgy, take it down’, or ‘the Home Secretary has complained about this content so we’d better remove it’ approach. Whether what the Bill requires is at all realistic is another matter.

Aiding, abetting and counselling – the 1861 Act

Aiding, abetting and counselling (the words used by the Secretary of State) is the language of the 1861 Act: “Whosoever shall aid, abet, counsel or procure the commission of any indictable offence … shall be liable to be tried, indicted and punished as a principal offender.”

One of the most significant features of accessory liability under the 1861 Act is that there can be no liability for aiding, abetting, counselling or procuring unless and until the principal offence has actually occurred. Whilst the aiding, abetting etc does not have to cause the principal offence that occurred, there has to be some connecting link with it. As Toulson LJ put it in Stringer:
“Whereas the provision of assistance need not involve communication between D and P, encouragement by its nature involves some form of transmission of the encouragement by words or conduct, whether directly or via an intermediary. An un-posted letter of encouragement would not be encouragement unless P chanced to discover it and read it. Similarly, it would be unreal to regard P as acting with the assistance or encouragement of D if the only encouragement took the form of words spoken by D out of P's earshot.”
Timing

This gives rise to a timing problem for a platform tasked with assessing whether a video is illegal. For illegality to arise under the 1861 Act the video must in fact have been viewed by someone contemplating an illegal entry offence, it must have encouraged them to enter the UK illegally, and they must then have done so (or attempted to do so).

Absent those factual events having taken place, there can be no offence of aiding and abetting. The aiding and abetting offence would further require the person posting the video to have intended the person contemplating illegal entry to view the video and to have intended to encourage their subsequent actual or attempted illegal entry.

Thus if a platform is assessing a video that is present on the platform, in order to adjudge the video to be illegal it would at a minimum have to consider how long it has been present on the platform. That is because there must be reasonable grounds to infer both that a prospective migrant has viewed it and that since doing so that person has already either entered the UK illegally or attempted to do so. Otherwise no principal offence has yet occurred and so no offence of aiding and abetting the principal offence can have been committed by posting the video.

It may in any case be a nice question whether, in the absence of any evidence available to the platform that a prospective migrant has in fact viewed the video, the platform would have reasonable grounds to infer the existence of any of these facts. To do so would appear to involve making an assumption of someone viewing the video and of a connected illegal entry offence that the assumed viewing has in fact encouraged. 
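For what it is worth, the chain of inferences just described can be set out in the same hypothetical vein, for a video already present on the platform. Again the names are mine; the point is simply how many distinct factual links would have to be inferred before the 1861 Act is engaged.

from dataclasses import dataclass

@dataclass
class AccessoryInferences:
    viewed_by_prospective_migrant: bool   # someone contemplating illegal entry has viewed the video
    viewing_encouraged_them: bool         # the video encouraged that person
    entry_or_attempt_followed: bool       # the principal offence (or an attempt) then occurred
    poster_intended_viewing: bool         # the poster intended such a person to view the video
    poster_intended_encouragement: bool   # and intended to encourage the entry or attempted entry

def grounds_to_infer_1861_act_liability(i: AccessoryInferences) -> bool:
    # No principal offence committed or attempted means no accessory offence at all.
    principal_offence = (i.viewed_by_prospective_migrant
                         and i.viewing_encouraged_them
                         and i.entry_or_attempt_followed)
    poster_intent = i.poster_intended_viewing and i.poster_intended_encouragement
    return principal_offence and poster_intent

Every one of those flags would have to be inferred from nothing more than the post itself and its surrounding data.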

For a post blocked by filtering at the point of upload (if that were considered feasible) the timing issue becomes a conundrum. Since no-one can have viewed a blocked video, none of the required subsequent events can possibly have occurred. Nor does the law provide any offence of attempting to aid and abet a 1971 Act offence.

Thus at least for upload filtering it appears that either there is a conceptual bar to a platform determining that a video blocked at the point of upload amounts to aiding and abetting, or the platform would (if the Bill permits it) have to engage in some legal time travel and assess illegality on a hypothetical future basis.

A basis on which a platform could be required to assess such hypothetical illegality may be provided by Clause 53(14)(b) of the Bill, which in effect provides that illegal content includes content that would be illegal if it were present on the platform. 

Even then, a video present on the platform only as a legal fiction cannot as a matter of fact be connected to any subsequent actual encouraged primary offence. Deemed presence would therefore have to be notionally extended for a sufficient period to hypothesise the factual events necessary for completion of the aiding and abetting offence: that a notional prospective migrant has hypothetically viewed the video present on the service, hypothetically been encouraged by the video to commit or attempt an illegal entry offence, and hypothetically then done so.

Even if any of this hypothesising is permissible under the Bill, whether it could provide reasonable grounds to infer illegality is a matter for conjecture. The need to hypothesise the existence of an actual illegal entry offence would never arise in a prosecution in court, since for a prosecution of the accessory to succeed it must be proved that the principal offence has taken place. In court, therefore, the assessment of accessory liability will always be within the context of a known past set of facts that are proved to have amounted to an offence by a principal.

Intent

The platform would also have to consider whether it has reasonable grounds to infer that the poster had the necessary intention to aid, abet etc the actual or attempted offence.

In court the prosecution would have to prove, beyond reasonable doubt, that the poster intended a viewer of the video to obtain or attempt illegal entry to the UK, the poster having knowledge of the facts that would and did render the principal’s conduct criminal. (‘Did’, because there can be no conviction for aiding and abetting unless the principal offence is proved to have taken place.)

That would raise the question of whether generalised knowledge of the existence of people crossing the Channel who might view the video and be encouraged by it would be sufficient to satisfy the knowledge requirement, when the poster would have been unaware of the particular individual who had in fact viewed the video and then committed the offence. Whilst it might be legitimate to find intent where the video is specifically promoting illegal crossings to prospective migrants, such a finding would seem to be highly debatable if the video did not offer targeted encouragement, even if it portrayed such activities in a positive light.

How should a platform decide whether the poster of the video had the requisite intent to constitute an aiding and abetting offence? The Bill requires the platform to apply the ‘reasonable grounds to infer’ test. It has to make that assessment on the basis of all the information reasonably available to it. That would likely bring into account not only the content of the video, but any surrounding text in the post and (if apparent) the nature of the person posting. The intent of a video advertising illegal Channel crossings might be clear, the intent of a bare clip of a dinghy carrying migrants (even if it showed smiling occupants and was accompanied by upbeat music) not so much.

Serious Crime Act 2007 – encouraging and assisting

We started by considering aiding and abetting under the 1861 Act because that is what the language used by the Secretary of State appeared to allude to. That is not, however, the end of the story. The Serious Crime Act 2007 enacted encouragement and assistance offences that, unlike aiding and abetting, do not depend on the principal offence actually taking place. They therefore avoid the time travel and hypothesising contortions involved in applying the Bill to the 1861 Act.

Also unlike aiding and abetting, an attempt to commit an encouragement or assistance offence under the 2007 Act is itself an offence. In principle therefore, a foiled attempt to upload a video capable of constituting an encouragement or assistance offence under the 2007 Act could itself constitute an offence.

By way of illustration, consider the simplest 2007 Act offence, S.44:
“(1) A person commits an offence if—

(a) he does an act capable of encouraging or assisting the commission of an offence; and

(b) he intends to encourage or assist its commission.

(2) But he is not to be taken to have intended to encourage or assist the commission of an offence merely because such encouragement or assistance was a foreseeable consequence of his act.”
So a platform tasked with adjudging whether the video is illegal would have to consider not only whether posting the video is ‘capable’ of encouraging the commission of an unlawful entry offence, but also whether the person who posted it intended to encourage the commission of the offence; bearing in mind that a mere foreseeable consequence does not count as intent. (That, it might be thought, rules out any but the most targeted advertising or promotional videos.)
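The corresponding sketch for S.44, still entirely hypothetical, is shorter – which is rather the point: liability no longer depends on anything actually having happened, only on capability and intent.

def grounds_to_infer_s44(capable_of_encouraging: bool,
                         grounds_to_infer_intent: bool,
                         intent_rests_only_on_foreseeability: bool) -> bool:
    if not capable_of_encouraging:
        return False
    # S.44(2): that encouragement was a foreseeable consequence of the act is not, by itself, intent.
    if intent_rests_only_on_foreseeability:
        return False
    return grounds_to_infer_intent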

How should a platform go about these two tasks? As with the 1861 Act aiding and abetting offences, part of the answer lies in Clause 170 of the Bill, which specifies the standard of ‘reasonable grounds to infer’ based on ‘all information reasonably available’ to the platform.

The analysis would be based on the same information as for aiding and abetting, but without the need to show (or hypothesise) that anyone actually viewed or acted upon the video. It is enough if publication of the video is capable of encouraging the offence. However, the express exclusion of a merely foreseeable consequence would limit the inference of intention that it is reasonable for the platform to draw.

Defence of reasonable conduct

Unlike the 1861 Act aiding and abetting offence, the 2007 Act offences provide a defence of ‘reasonable conduct’. This comes in two different versions:

(1) that the defendant knew that certain circumstances existed and that it was reasonable for him to act as he did in those circumstances; or

(2) that he believed certain circumstances to exist, that his belief was reasonable, and that it was reasonable for him to act as he did in the circumstances as he believed them to be.

Factors that the 2007 Act states have to be considered in relation to reasonableness include the seriousness of the offence and any purpose for which the defendant claims to have been acting. A 2007 Act defence will succeed in court if the defendant proves it on the balance of probabilities.

The information on which the possibility of a reasonableness defence depends may well be extrinsic to the platform or its automated systems. The purpose for which a user has acted is something within the user’s knowledge and belief and may not be apparent from the post itself.

As already mentioned, this is significant because the platform cannot consider the possibility of a defence unless, on the basis of all relevant information that is reasonably available to it, it has reasonable grounds to infer that a defence may be successfully relied upon (in the context of the 2007 Act defence: successful on the balance of probabilities).

In determining what information is reasonably available to the provider, the following factors, in particular, are relevant: (a) the size and capacity of the provider, and (b) whether a judgement is made by human moderators, by means of automated systems or processes, or by means of automated systems or processes together with human moderators.

The probable net result, for an automated system, is that the possibility of a defence is to be ignored unless it is apparent from the information processed by the system. Yet for the 2007 Act encouragement and assistance offences, the defences are an integral element of the offence, designed to balance the potentially overreaching effects of inchoate liability founded on mere capability.
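The defence gate can be bolted on to the sketch in the same hypothetical spirit; reducing the burden and standard of proof to a single flag is itself a measure of how much is being flattened.

def treat_as_illegal_2007_act(elements_made_out: bool,
                              defence_grounds_apparent: bool,
                              defence_may_succeed_on_balance: bool) -> bool:
    if not elements_made_out:
        return False
    # A defence is considered only if grounds for it are apparent from the information
    # reasonably available to the platform; otherwise its possibility is ignored.
    if not defence_grounds_apparent:
        return True
    # Where grounds are apparent, the question is whether the defence may succeed
    # on the balance of probabilities.
    return not defence_may_succeed_on_balance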

In reality, however, it smacks of fantasy to imagine that a platform, whether employing automated systems, human moderators, or a combination of the two, would be capable of applying rules of this nuance and complexity, particularly in real or near real time.

The broader issue

These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences, although the Secretary of State’s statement has provided an unexpected opportunity to illustrate them. They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression.

It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.

