Saturday, 11 March 2023

Five lessons from the Loi Avia

In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia – France’s equivalent of the German NetzDG law – for incompatibility with fundamental rights. Although the controversy over the Loi Avia has passed into internet history, the Constitutional Council's decision provides some instructive comparisons when we examine the UK’s Online Safety Bill.

As the Bill awaits its House of Lords Committee debates, this is an opportune moment to cast our minds back to the Loi Avia decision and see what lessons it may hold. Caution is necessary in extrapolating from judgments on fundamental rights, since they are highly fact-specific; and when they do lay down principles they tend to leave cavernous room for future interpretation. Nevertheless, the Loi Avia decision makes uncomfortable reading for some core aspects of the Online Safety Bill.

Background

The key features of the Loi Avia were:

  • For illegal CSEA and terrorism content, one-hour removal of content notified to an in-scope publisher or host by the administrative authority, on pain of one year’s imprisonment and a 250,000 euro fine.

The Constitutional Council’s objection was founded on the determination of illegality being at the sole discretion of the administrative authority. This provision has no direct parallel in the Online Safety Bill. However, similar considerations could come into play should an Ofcom Code of Practice recommend giving state agencies some kind of trusted flagger status.

  • For content contravening specified hate-related, genocide-related, sexual harassment and child pornography laws, 24-hour removal of manifestly illegal content following notification by any person to an in-scope platform operator, under penalty of a fine of 250,000 euros.

The Online Safety Bill analogue is a reactive ‘swift take down’ duty on becoming aware of in-scope illegal content. Unlike the Loi Avia, the Bill also imposes proactive prevention duties.

The Online Safety Bill imposes duties for both illegal content and legal content harmful to children. Since the Loi Avia concerned only illegal content, the Constitutional Council did not have to consider obligations relating to ‘legal but harmful’ content of any kind, whether for adults or children.

Lesson 1: The rule of law comes first

The tests that the Constitutional Council applied to the Loi Avia – legality, necessity and proportionality – are components of the European Convention on Human Rights, with which the Online Safety Bill must comply.

Along the obstacle course of human rights compatibility, the first hurdle is legality: known in the ECHR as the “prescribed by law” test. In short, a law must have the quality of law to qualify as law. If the law does not enable someone to foresee with reasonable certainty whether their proposed conduct is liable to be affected as a consequence of the law, it falls at that first hurdle. If legislation will result in arbitrary or capricious decisions - for example through vagueness or grant of excessive discretion - it lacks the essential quality of law.

The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:

"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."

Whilst most often applied to criminal liability, the legality objection has also been described as a constitutional principle that underpins the rule of law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):

"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."

The French Constitutional Council held that the Loi Avia failed the legality test in one respect. The Loi provided that the intentional element of the offence of failure to remove content notified by any person could arise from absence of a “proportionate and necessary examination of the notified content”. The Constitutional Council found that if this was intended to provide a defence for platform operators, it was not drafted in terms that allowed its scope to be determined. In other words, a defence (if that is what it was) of having carried out a proportionate and necessary examination was too vague to pass the legality test.

The Online Safety Bill differs from the Loi Avia. It does not impose criminal liability on a platform for failure to take down a particular item of user content.  Enforcement by the appointed regulator, Ofcom, is aimed at systematic failures to fulfil duties rather than at individual content decisions. Nevertheless, the Bill is liberally sprinkled with references to proportionality – similar language to that which the French Constitutional Council held was too vague. It typically couches platform and search engine duties as an obligation to use proportionate systems and processes designed to achieve a stipulated result.

It is open to question whether compliance with the legality principle can be achieved simply by inserting ‘proportionate’ into a broadly stated legal duty, instead of grasping the nettle of articulating a more concrete obligation that would enable the proportionality of the interference with fundamental rights to be assessed by a court.

The government’s ECHR Memorandum seeks to head off any objection along these lines by stressing the higher degree of certainty that it expects would be achieved when Ofcom’s Codes of Practice have been laid before Parliament and come into effect. Even if that does the trick, it is another matter whether it is desirable to grant that amount of discretion over individual speech to a regulator such as Ofcom.

For the Online Safety Bill the main relevance of the legality hurdle is to the freedom of expression rights of individual users. Can a user foresee with reasonable certainty whether their proposed communication is liable to be affected as a result of a platform or search engine seeking to fulfil a safety duty imposed by the legislation? The Bill requires those online intermediaries to play detective, judge and bailiff. Interpolation of an online intermediary into the process of adjudging and sanctioning user content is capable of introducing arbitrariness that is not present when the same offence is prosecuted through the courts, with their attendant due process protections.

In the case of the Online Safety Bill, arbitrariness is a real prospect. That is largely because of the kinds of offences on which platforms and search engines are required to adjudicate, the limited information available to them, and the standard to which they have to be satisfied that the user content is illegal.

Lesson 2: Beyond ‘manifestly illegal’

An intriguing feature of the Constitutional Council decision is that although the Loi Avia prescribed, on the face of it, a high threshold for removal of illegal content – manifest illegality – that was not enough to save the legislation from unconstitutionality. ‘Manifestly illegal’ is a more stringent test than the ‘reasonable grounds to infer’ threshold prescribed by the Online Safety Bill.

The Loi Avia required removal of manifestly illegal user content within 24 hours of receiving from anyone a notification which gave the notifier’s identity, the location of the content, and which specified the legal grounds on which the content was said to be manifestly illegal.

The Constitutional Council observed that the legislation required the operator to examine all content reported to it, however numerous the reports, so as not to risk being penalised. Moreover, once reported the platform had to consider not only the specific grounds on which the content was reported, but all offences within the scope of the legislation – even though some might present legal technicalities or call for an assessment of context. These issues were especially significant in the light of the 24-hour removal deadline and the criminal penalty for each failure to withdraw.

In the Constitutional Council’s view the consequence of these provisions, taking into account also the absence of any clearly specified defence to liability, was that operators could only be encouraged to withdraw content reported to them, whether or not it was manifestly illegal. That was not necessary, appropriate or proportionate and so was unconstitutional.

The Online Safety Bill does not prescribe specific time limits, but requires swift removal of user content upon the platform becoming aware of in-scope illegality. As with the Loi Avia, that applies to all in-scope offences.

The touchstone under the Bill is reasonable grounds to infer illegality, on the basis of all information reasonably available to the platform. Unless that threshold is surmounted, the platform does not have to remove the content. If it is surmounted, the platform must remove it swiftly.

At least in the case of automated proactive monitoring and filtering, the available information will be minimal – the users’ posts themselves and whatever the system knows about the relevant users. As a consequence, the decisions required to be made for many kinds of offence – especially those dependent on context - will inevitably be arbitrary. Moreover, a platform has to ignore the possibility of a defence unless it has something from which it can infer on reasonable grounds that a defence may succeed.
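To make the structure of that judgment explicit, here is a minimal sketch (in Python, purely for illustration) of the decision rule as just described. The function and field names are my own assumptions, not anything drawn from the Bill or from Ofcom guidance; the point is simply that the rule operates on whatever inferences can be drawn from the limited information available to the system.

```python
# Illustrative sketch only: the names are hypothetical, not taken from the Bill.
from dataclasses import dataclass

@dataclass
class IllegalityJudgement:
    # Inferences drawn solely from information reasonably available to the
    # platform (for automated filtering: the post plus related on-platform data).
    grounds_to_infer_all_elements: bool  # reasonable grounds to infer each element of an offence
    grounds_to_infer_defence: bool       # reasonable grounds to infer a defence may succeed

def must_treat_as_illegal(j: IllegalityJudgement) -> bool:
    """Apply the 'reasonable grounds to infer' rule as described above."""
    if not j.grounds_to_infer_all_elements:
        return False   # threshold not surmounted: no removal duty for this item
    if j.grounds_to_infer_defence:
        return False   # a defence counts only if grounds for it are inferable
    return True        # otherwise the content must be treated as illegal and removed swiftly
```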

Whilst the Online Safety Bill lacks the Loi Avia’s chilling sword of Damocles of short prescriptive deadlines and automatic criminal liability for failure to remove, the reason why those factors (among others) were legally significant was their effect on the freedom of expression of users: the likely over-removal of lawful user content. The Online Safety Bill’s lower threshold for adjudging illegality, combined with the requirement to make those judgments in a relative information vacuum - often at scale and speed - does more than just encourage takedown of legal user content: it requires it.

Lesson 3 – The lens of prior restraint

The briefly glimpsed elephant in the room of the Loi Avia decision is prior restraint. The Constitutional Council alluded to it when it remarked that the removal obligations were not subject to the prior intervention of a judge or subject to any other condition.

Legislation requiring a platform summarily to adjudge the legality of individual items of user content at speed and at scale bears the hallmarks of prior restraint: removal prior to full adjudication on the merits after argument and evidence.

Prior restraint is not impermissible. It does require the most stringent scrutiny and circumscription, in which the risk of removal of legal content will loom large. The ECtHR in Yildirim considered an interim court order blocking Google Sites.  It characterised that as a prior restraint, and observed: “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”. 

The ECtHR in Animal Defenders v UK distinguished a prior restraint imposed on an individual act of expression from general measures: in that case a ban on broadcasting political advertising.

If an individual item is removed ultimately pursuant to a general measure, that does not prevent the action being characterised as a prior restraint. If it did, the doctrine could not be applied to courts issuing interim injunctions. The fact that the Online Safety Bill does not penalise a platform for getting an individual decision wrong does not disguise the fact that the required task is to make judgments about individual items of user content constituting individual acts of expression.

The appropriateness of categorising at least proactive detection and filtering obligations as a form of prior restraint is reinforced by the CJEU decision in Poland v The European Parliament and Council, which applied Yildirim to those kinds of provisions in the context of copyright.

Lesson 4 – Context, context, context

The Constitutional Council pointed out the need to assess context for some offences. That is all the more significant for the Online Safety Bill, for several reasons.

First, unlike the Loi Avia the Online Safety Bill imposes proactive, not just reactive, duties. That will multiply the volume of user content to be assessed, in many cases requiring the deployment of automated content monitoring. Such systems, by their very nature, can be aware only of content flowing through the system and not of any external context. 

Second, the Bill requires illegality assessments to be made ignoring external contextual information unless it is reasonably available to the platform.

Third, defences such as reasonableness will often be inherently contextual. The Bill, however, enables the intermediary to take account of the possibility of a defence only if it has information on the basis of which it can infer that a defence may successfully be relied upon.  

Lesson 5 – Proactive duties

The Loi Avia decision was about reactive duties based on notification. Proactive illegality duties present inherently greater human rights challenges. A less prescriptive, less draconian reactive regime, combined with a ‘manifest illegality’ standard and greater due process safeguards, might possibly have survived. But if the starting point is aversion to a regime that encourages takedown of legal user content, it is difficult to see how a regime that carries a certainty of over-takedown, as do the Online Safety Bill’s proactive illegality duties, could pass muster.

What is to be done?

Raising the Online Safety Bill’s standard of assessment from reasonable grounds to infer to manifest illegality would go some way towards a better prospect of human rights compliance. But that still leaves the problem of the assessment having to be made in ignorance of external context; and the problem of the possibility of a defence being discounted unless it is apparent from the information flowing through the system. Those more intractable issues put in question the kinds of offences that platforms and search engines could be called upon to adjudge. 



Tuesday, 24 January 2023

Positive light or fog in the Channel?

If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications. Proactive measures could involve scouring the platform for content already uploaded, filtering and blocking at the point of attempted upload, or both.

The political target of the Bill amendment, which the government says it will introduce in the House of Lords, is videos of migrants crossing the Channel in boats. The Secretary of State explained it thus:
“We will also add Section 24 of the Immigration Act 1971 to the priority offences list in Schedule 7. Although the offences in Section 24 cannot be carried out online, paragraph 33 of the Schedule states that priority illegal content includes the inchoate offences relating to the offences that are listed. Therefore aiding, abetting, counselling, conspiring etc those offences by posting videos of people crossing the channel which show that activity in a positive light could be an offence that is committed online and therefore falls within what is priority illegal content. The result of this amendment would therefore be that platforms would have to proactively remove that content.”
We have to assume that this wheeze was dreamed up in some haste, meeting the immediate political imperative to respond to a strongly supported back bench amendment that tried to tack videos of boat crossings on to the Bill’s children’s duties. Now that the dust has settled, at least temporarily, let us take a look at what would be involved in applying the government's proposal.

In view of some of the media commentary, it is worth emphasising that the proposed amendment to the Bill would not create a new offence. It is based on existing accessory liability legislation, which platforms (and indeed search engines) would have to apply proactively.

In a positive light

Where does ‘in a positive light’ come from? Presumably the Secretary of State must have had in mind that if a video shows the activity of crossing the Channel to gain illegal entry to the UK in a negative light – thus tending to deter the activity - that cannot amount to counselling (in modern language, encouraging) an offence of entering (or attempting to enter) the UK illegally. So far so good. But that does not mean we should jump to the conclusion that ‘in a positive light’ is sufficient to amount to encouragement.

The offence of aiding, abetting, counselling etc a Section 24 offence applies not only to videos but to any kind of communication, whether on social media, simple discussion forums, websites or elsewhere.

You do not have to go far to find studies suggesting that illegal immigration can have positive benefits to an economy. Does supporting that position in an online discussion about UK immigration put the activity of illegal entry to the UK in a positive light? Quite possibly. Does it (in the legal sense) encourage an offence of illegal entry to the UK? Surely not. That is a far cry from intentionally encouraging a prospective illegal migrant to commit an illegal entry offence.

The idea that someone might be prosecuted for voicing that kind of opinion in a general online discussion is (one would hope) absurd. It brings to mind the comment of Lord Scott in Rusbridger v Attorney-General, a case about the moribund Section 3 of the Treason Felony Act 1848:
“[Y]ou do not have to be a very good lawyer to know that to advocate the abolition of the monarchy and its replacement by a republic by peaceful and constitutional means will lead neither to prosecution nor to conviction. All you need to be is a lawyer with commonsense.”
In any event legislation must, so far as it is possible to do so, be read and given effect in a way which is compatible with the European Convention on Human Rights right of freedom of expression (S.3 Human Rights Act 1998; albeit the Bill of Rights Bill would repeal that provision).

The Secretary of State’s proposal has reportedly sparked fears among humanitarian organisations of consequences if they share footage that may call into question the policing of Channel crossings. The Home Office, for its part, has said that they would not be penalised. That is an understandable view if all the legal elements of an encouragement offence are properly taken into account.

Nevertheless, it is not so far-fetched a notion that an online platform, tasked by the Online Safety Bill proactively to detect and remove user content that encourages an illegal entry offence, might consider itself duty-bound to remove content that in actual fact would not result in prosecution or a conviction in court. There are specific reasons for this under the Bill, which contrast with prosecution through the courts.

Prosecution versus the Bill's illegality duties

First, the platform’s removal duty under the Bill kicks in not if the user’s content is illegal beyond reasonable doubt, or manifestly illegal, but if the platform has ‘reasonable grounds to infer’ illegality – on the face of it a significantly lower standard. Whether this standard is compatible with Article 10 of the European Convention on Human Rights is questionable, but nevertheless it is what the Bill says. The Bill would inevitably require platforms to remove some content that is in fact legal.

Second, the Bill requires platforms to act on all the information reasonably available to the platform: a far more limited factual basis than a court. At least for an automated system that would be likely to be the content of the post and any related information on the platform (such as information indicating the nature and identity of the poster). It excludes any extrinsic contextual information not reasonably available to the platform. 

Further, the platform can take into account the possibility of a defence only if it has reasonable grounds to infer that one may successfully be relied upon. For many defences (such as reasonable excuse) any grounds for a defence will not necessarily be apparent from the information available to the platform, in which case the possibility of a defence must be ignored. 

The platform’s assessment of illegality may thus depend on the happenstance of whether there is anything in the post itself, or its surrounding data, that points to the possibility of a successful defence. For some widely drawn offences intent and available defences are the most significant elements in determining legality, and are integral to the balance drawn by the legislature. This, we shall see, is of particular relevance to the encouragement and assistance offences under the Serious Crime Act 2007.

Third, the task of a platform is not to second-guess whether the authorities would prosecute, but to decide whether it has reasonable grounds to infer that the content falls within the letter of the law. Whilst the Bill makes numerous references to proportionality, that does not affect the basis on which the platform must determine illegality. That is a binary, yes or no assessment. There is no obvious room for a platform to conclude that something is only a little bit illegal, or to decide that, once detected, some content crossing the ‘reasonable grounds to infer’ threshold could be left up. Certainly the political expectation is that any detected illegal content will be removed.

If that is right, the assessment that platforms are required to make under the Bill lacks anything akin to the ameliorating effect of prosecutorial discretion on the rough edges of the criminal law. Conversely, to build such discretion, even principles-based, into the decision-making required of platforms would hardly be a solution either, especially not at the scale and speed implied by automated proactive detection and removal obligations. We do not want platforms to be arbiters of truth, but to ask them (or their automated systems) to be judges of the public interest or of the seriousness of offending would be a recipe for guesswork and arbitrariness, even under the guidance of Ofcom.

If this seems like a double bind, it is. It reflects a fundamental flaw in the Bill’s duty of care approach: the criminal law was designed to be operated within the context of the procedural protections provided by the legal system, and to be adjudged by courts on established facts after due deliberation; not to be the subject of summary justice dispensed on the basis of incomplete information by platforms and their automated systems tasked with undertaking proactive detection.

Fourth, we shall see that in some cases the task required of the platform appears to involve projection into the future on hypothetical facts. Courts are loath to assess future criminal illegality on a hypothetical basis. Their task at trial is to determine whether the events that are proved in fact to have occurred amounted to an offence.

Fifth, inaccuracy. False positives are inevitable with any moderation system - all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives.

Sixth, the involvement of Ofcom. The platform would have the assistance of a Code of Practice issued by Ofcom. That would no doubt include a section describing the law on encouragement and assistance in the context of the S.24 1971 Act illegal entry offences, and would attempt to draw some lines to guide the platform’s decisions about whether it had reasonable grounds to infer illegality.

An Ofcom Code of Practice would carry substantial legal and practical weight. That is because the Bill provides that taking the measures recommended in a Code of Practice is deemed to fulfil the platform’s duties under the Bill. Much would therefore rest on Ofcom’s view of the law of encouragement and assistance and what would constitute reasonable grounds to draw an inference of illegality in various factual scenarios.

Seventh, the involvement of the Secretary of State. Ofcom might consider whether to adopt the Secretary of State’s ‘in a positive light’ interpretation. As the Bill currently stands, if the Secretary of State did not approve of Ofcom’s recommendation for public policy reasons s/he could send the draft Code of Practice back to Ofcom with a direction to modify – and, it seems, keep on doing so until s/he was happy with its contents.

Even if that controversial power of direction were removed from the Bill, Ofcom would still have significant day to day power to adopt interpretations of the law and apply them to platforms’ decision-making (albeit Ofcom’s interpretations would in principle be open to challenge by judicial review).

As against those seven points, in fulfilling its duties under the Bill a platform is required to have particular regard to the importance of protecting users’ right to freedom of expression within the law. ‘Within the law’ might suggest that the duty has minimal relevance to the illegality duties, especially when clause 170 sets out expressly how platforms are to determine illegality. It provides that if the reasonable grounds to infer test is satisfied, the platform must treat the content as illegal.

The government’s ECHR Memorandum suggests that the ‘have particular regard’ duty may have some effect on illegality determination, but it does not explain how it does so in the face of the express provisions of clause 170. It also inaccurately paraphrases clause 18 by omitting ‘within the law’:
“34. Under clause 18, all in-scope service providers are required to have regard to the importance of protecting freedom of expression when deciding on and implementing their safety policies and procedures. This will include assessments as to whether content is illegal or of a certain type and how to fulfil its duties in relation to such content. Clause 170 makes clear that providers are not required to treat content as illegal content (i.e. to remove it from their service) unless they have reasonable grounds to infer that all elements of a relevant offence are made out. They must make that inference on the basis of all relevant information reasonably available to them.”
That is all by way of lengthy preliminary. Now let us delve into how a platform might be required to go about assessing the legality of a Channel dinghy video, first under the Accessories and Abettors Act 1861 and then under the companion encouragement and assistance offences in the Serious Crime Act 2007.

Let us assume that the Secretary of State is right: that posting a video of people crossing the Channel in dinghies, which shows that activity in a positive light, can in principle amount to encouraging an illegal entry offence. In the interests of simplicity, I will ignore the Secretary of State’s reference to conspiracy. How should a platform go about determining illegality?

Spoiler alert: the process is more complicated and difficult than the Secretary of State’s pronouncement might suggest. And in case anyone is inclined to charge me with excessive legal pedantry, let us not forget that the task that the Bill expressly requires a platform to undertake is to apply the rules laid down in the Bill and in the relevant underlying offences. The task is not to take a rough and ready ‘that looks a bit dodgy, take it down’, or ‘the Home Secretary has complained about this content so we’d better remove it’ approach. Whether what the Bill requires is at all realistic is another matter.

Aiding, abetting and counselling – the 1861 Act

Aiding, abetting and counselling (the words used by the Secretary of State) is the language of the 1861 Act: “Whosoever shall aid, abet, counsel or procure the commission of any indictable offence … shall be liable to be tried, indicted and punished as a principal offender.”

One of the most significant features of accessory liability under the 1861 Act is that there can be no liability for aiding, abetting, counselling or procuring unless and until the principal offence has actually occurred. Whilst the aiding, abetting etc does not have to cause the principal offence that occurred, there has to be some connecting link with it. As Toulson LJ put it in Stringer:
“Whereas the provision of assistance need not involve communication between D and P, encouragement by its nature involves some form of transmission of the encouragement by words or conduct, whether directly or via an intermediary. An un-posted letter of encouragement would not be encouragement unless P chanced to discover it and read it. Similarly, it would be unreal to regard P as acting with the assistance or encouragement of D if the only encouragement took the form of words spoken by D out of P's earshot.”
Timing

This gives rise to a timing problem for a platform tasked with assessing whether a video is illegal. For illegality to arise under the 1861 Act the video must in fact have been viewed by someone contemplating an illegal entry offence, the video would have to have encouraged them to enter the UK illegally, and they would have to have proceeded to do so (or to attempt to do so).

Absent those factual events having taken place, there can be no offence of aiding and abetting. The aiding and abetting offence would further require the person posting the video to have intended the person contemplating illegal entry to view the video and to have intended to encourage their subsequent actual or attempted illegal entry.

Thus if a platform is assessing a video that is present on the platform, in order to adjudge the video to be illegal it would at a minimum have to consider how long it has been present on the platform. That is because there must be reasonable grounds to infer both that a prospective migrant has viewed it and that since doing so that person has already either entered the UK illegally or attempted to do so. Otherwise no principal offence has yet occurred and so no offence of aiding and abetting the principal offence can have been committed by posting the video.

It may in any case be a nice question whether, in the absence of any evidence available to the platform that a prospective migrant has in fact viewed the video, the platform would have reasonable grounds to infer the existence of any of these facts. To do so would appear to involve making an assumption of someone viewing the video and of a connected illegal entry offence that the assumed viewing has in fact encouraged. 

For a post blocked by filtering at the point of upload (if that were considered feasible) the timing issue becomes a conundrum. Since no-one can have viewed a blocked video, none of the required subsequent events can possibly have occurred. Nor does the law provide any offence of attempting to aid and abet a 1971 Act offence.

Thus at least for upload filtering it appears that either there is a conceptual bar to a platform determining that a video blocked at the point of upload amounts to aiding and abetting, or the platform would (if the Bill permits it) have to engage in some legal time travel and assess illegality on a hypothetical future basis.

A basis on which a platform could be required to assess such hypothetical illegality may be provided by Clause 53(14)(b) of the Bill, which in effect provides that illegal content includes content that would be illegal if it were present on the platform. 

Even then, a video present on the platform only as a legal fiction cannot as a matter of fact be connected to any subsequent actual encouraged primary offence. Deemed presence would therefore have to be notionally extended for a sufficient period to hypothesise the factual events necessary for completion of the aiding and abetting offence: that a notional prospective migrant has hypothetically viewed the video present on the service, hypothetically been encouraged by the video to commit or attempt an illegal entry offence, and hypothetically then done so.

Even if any of this hypothesising is permissible under the Bill, whether it could provide reasonable grounds to infer illegality is a matter for conjecture. The need to hypothesise the existence of an actual illegal entry offence would never arise in a prosecution in court, since for a prosecution of the accessory to succeed it must be proved that the principal offence has taken place. In court, therefore, the assessment of accessory liability will always be within the context of a known past set of facts that are proved to have amounted to an offence by a principal.
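Pulling those threads together, the chain of inferences can be summarised schematically. The sketch below is illustrative only: the field names are my own and nothing of the sort appears in the Bill. It simply lists the facts a platform would need reasonable grounds to infer before adjudging an already-posted video illegal under the 1861 Act.

```python
# Illustrative checklist of the inferences discussed above; names are hypothetical.
from dataclasses import dataclass

@dataclass
class AidingAbettingInferences:
    viewed_by_prospective_migrant: bool       # someone contemplating illegal entry has seen the video
    encouraged_that_person: bool              # a connecting link between the video and their conduct
    principal_offence_since_occurred: bool    # that person has since entered, or attempted to enter, illegally
    poster_intended_that_encouragement: bool  # intent to encourage the actual or attempted entry

def grounds_to_infer_1861_offence(i: AidingAbettingInferences) -> bool:
    """Every link in the chain must be inferable; absent any one, no accessory offence."""
    return all([
        i.viewed_by_prospective_migrant,
        i.encouraged_that_person,
        i.principal_offence_since_occurred,
        i.poster_intended_that_encouragement,
    ])
```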

Intent

The platform would also have to consider whether it has reasonable grounds to infer that the poster had the necessary intention to aid, abet etc the actual or attempted offence.

In court the prosecution would have to prove, beyond reasonable doubt, that the poster intended a viewer of the video to obtain or attempt illegal entry to the UK, the poster having knowledge of the facts that would and did render the principal’s conduct criminal. (‘Did’, because there can be no conviction for aiding and abetting unless the principal offence is proved to have taken place.)

That would raise the question of whether generalised knowledge of the existence of people crossing the Channel who might view the video and be encouraged by it would be sufficient to satisfy the knowledge requirement, when the poster would have been unaware of the particular individual who had in fact viewed the video and then committed the offence. Whilst it might be legitimate to find intent where the video is specifically promoting illegal crossings to prospective migrants, such a finding would seem to be highly debatable if the video did not offer targeted encouragement, even if it portrayed such activities in a positive light.

How should a platform decide whether the poster of the video had the requisite intent to constitute an aiding and abetting offence? The Bill requires the platform to apply the ‘reasonable grounds to infer’ test. It has to make that assessment on the basis of all the information reasonably available to it. That would likely bring in to account not only the content of the video, but any surrounding text in the post and (if apparent) the nature of the person posting. The intent of a video advertising illegal Channel crossings might be clear, the intent of a bare clip of a dinghy carrying migrants (even if it showed smiling occupants and was accompanied by upbeat music) not so much.

Serious Crime Act 2007 – encouraging and assisting

We started by considering aiding and abetting under the 1861 Act because that is what the language used by the Secretary of State appeared to allude to. That is not, however, the end of the story. The Serious Crime Act 2007 enacted encouragement and assistance offences that, unlike aiding and abetting, do not depend on the principal offence actually taking place. They therefore avoid the time travel and hypothesising contortions involved in applying the Bill to the 1861 Act.

Also unlike aiding and abetting, an attempt to commit an encouragement or assistance offence under the 2007 Act is itself an offence. In principle therefore, a foiled attempt to upload a video capable of constituting an encouragement or assistance offence under the 2007 Act could itself constitute an offence.

By way of illustration, consider the simplest 2007 Act offence, S.44:
“(1) A person commits an offence if—

(a) he does an act capable of encouraging or assisting the commission of an offence; and

(b) he intends to encourage or assist its commission.

(2) But he is not to be taken to have intended to encourage or assist the commission of an offence merely because such encouragement or assistance was a foreseeable consequence of his act.”
So a platform tasked with adjudging whether the video is illegal would have to consider not only whether posting the video is ‘capable’ of encouraging the commission of an unlawful entry offence, but also whether the person who posted it intended to encourage the commission of the offence; bearing in mind that a mere foreseeable consequence does not count as intent. (That, it might be thought, rules out any but the most targeted advertising or promotional videos.)

How should a platform go about these two tasks? As with the 1861 Act aiding and abetting offences, part of the answer lies in Clause 170 of the Bill, which specifies the standard of ‘reasonable grounds to infer’ based on ‘all information reasonably available’ to the platform.

The analysis would be based on the same information as for aiding and abetting, but without the need to show (or hypothesise) that anyone actually viewed or acted upon the video. It is enough if publication of the video is capable of encouraging the offence. However, the express exclusion of a merely foreseeable consequence would limit the inference of intention that it is reasonable for the platform to draw.
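Schematically, and again purely as an illustration (the variable names are mine; the statute defines the actual test), the two limbs of a S.44 assessment combine like this:

```python
# Illustrative sketch of the two limbs of S.44 as discussed above; names hypothetical.
def s44_elements_inferable(capable_of_encouraging: bool,
                           intent_to_encourage: bool,
                           intent_rests_only_on_foreseeability: bool) -> bool:
    """Both limbs must be inferable, and S.44(2) bars treating a merely
    foreseeable consequence of the act as intent."""
    if intent_rests_only_on_foreseeability:
        return False
    return capable_of_encouraging and intent_to_encourage
```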

Defence of reasonable conduct

Unlike for the 1861 Act aiding and abetting offence, the 2007 Act offences provide a defence of ‘reasonable conduct’. This comes in two different versions:

(1) that the defendant knew that certain circumstances existed and that it was reasonable for him to act as he did in those circumstances; or

(2) that he believed certain circumstances to exist, that his belief was reasonable, and that it was reasonable for him to act as he did in the circumstances as he believed them to be.

Factors that the 2007 Act states have to be considered in relation to reasonableness include the seriousness of the offence and any purpose for which the defendant claims to have been acting. A 2007 Act defence will succeed in court if the defendant proves it on the balance of probabilities.

The information on which the possibility of a reasonableness defence depends may well be extrinsic to the platform or its automated systems. The purpose for which a user has acted is something within the user’s knowledge and belief and may not be apparent from the post itself.

As already mentioned, this is significant because the platform cannot consider the possibility of a defence unless, on the basis of all relevant information that is reasonably available to it, it has reasonable grounds to infer that a defence may be successfully relied upon (in the context of the 2007 Act defence: successful on the balance of probabilities).

In determining what information is reasonably available to the provider, the following factors, in particular, are relevant: (a) the size and capacity of the provider, and (b) whether a judgement is made by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators.

The probable net result, for an automated system, is that the possibility of a defence is to be ignored unless it is apparent from the information processed by the system. Yet for the 2007 Act encouragement and assistance offences, the defences are an integral element of the offence, designed to balance the potentially overreaching effects of inchoate liability founded on mere capability.
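By way of illustration only (the parameter names are mine, and the code does no more than restate the structure described above), the two versions of the reasonable conduct defence and the Bill’s gate on considering it might be sketched as follows:

```python
# Illustrative sketch; hypothetical names. The statute and the Bill define the actual tests.
def reasonable_conduct_defence(knew_circumstances: bool,
                               believed_circumstances: bool,
                               belief_was_reasonable: bool,
                               acting_was_reasonable: bool) -> bool:
    """Version (1): knowledge of the circumstances plus reasonable conduct.
    Version (2): reasonable belief in the circumstances plus reasonable conduct.
    In court the defendant must prove one of these on the balance of probabilities."""
    version_1 = knew_circumstances and acting_was_reasonable
    version_2 = believed_circumstances and belief_was_reasonable and acting_was_reasonable
    return version_1 or version_2

def platform_may_consider_defence(defence_grounds_in_available_info: bool) -> bool:
    """Under the Bill the possibility of a defence is ignored unless grounds for it
    appear from information reasonably available to the platform, which for an
    automated system will usually be confined to the post and on-platform data."""
    return defence_grounds_in_available_info
```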

In reality, however, it smacks of fantasy to imagine that a platform, whether employing automated systems, human moderators, or a combination of the two, would be capable of applying rules of this nuance and complexity, particularly in real or near real time.

The broader issue

These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences, although the Secretary of State’s statement has provided an unexpected opportunity to illustrate them. They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression.

It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.


Friday, 6 January 2023

Twenty questions about the Online Safety Bill

Before Christmas Culture Secretary Michelle Donelan invited members of the public to submit questions about the Online Safety Bill, which she will sit down to answer in the New Year. 

Here are mine. 

1. A volunteer who sets up and operates a Mastodon instance in their spare time appears to be the provider of a user-to-user service. Is that correct?

2. Alice runs a personal blog on a blogging platform and is able to decide which third party comments on her blogposts to accept or reject. Is Alice (subject to any Schedule 1 exemptions) the provider of a user-to-user service in relation to those third party comments?

3. Bob runs a blog on a blogging platform. He has multiple contributors, whom he selects. Is Bob the provider of a user-to-user service in relation to their contributions?

4. Is a collaborative software development platform the provider of a user-to-user service?

5. The exclusion from “regulated user-generated content” extends to comments on comments (Clause 49(6)). But a facility enabling free form ‘comments on comments’ appears to disapply the Sch 1 para 4 limited functionality user-to-user service exemption. Is that correct? If so, what is the rationale for the difference? Would, for example, a newspaper website with functionality that enabled free form ‘comments on comments’ therefore not enjoy exclusion from scope under Sch 1 para 4?

6. Does the Sch 1 para 4 limited functionality exemption apply to goods retailers’ own-product review sections? If so, does it achieve that when it refers only to content and not to the goods themselves?

7. Would a site that enables academics to upload papers, subject to prior review by the site operator, be a user-to-user service? 

8. Cl 204(2)(e) appears to suggest that a multiplayer online game would be a user-to-user service by virtue of player interaction alone, whether or not there is an inter-player chat or similar facility. Is that right?

9. Carol sets up and operates a voluntary online neighbourhood watch forum for her locality. Would Carol be a provider of a user-to-user service? 

10. Dan operates a blockchain node. Would Dan be a provider of a user-to-user service?

11. Grace chairs a public meeting using a video platform. Grace has control over who can join the meeting. Would Grace be a provider of a user-to-user service in relation to that meeting?

12. The threshold that the Bill requires a platform to apply when determining criminal illegality is ‘reasonable grounds to infer’. The criminal standard of proof is ‘beyond reasonable doubt’. Would not the Bill’s lower threshold inevitably require removal (at least for proactive obligations) of content that is in fact legal? For automated real time systems would that not occur at scale?

13. The Bill requires a platform to adjudge illegality on the basis of all relevant information reasonably available to it. Particularly for proactive automated processes, that will be limited to what users have posted to the platform. Yet often, illegality depends crucially on extrinsic contextual information that is not available to the platform. How could the adjudgment required by the Bill thus not be arbitrary?

14. For many offences the question of illegality is likely to revolve mainly around intent and available defences. The Bill requires platforms to assess illegality on the basis that the possibility of a defence is to be taken into account only if the platform has reasonable grounds to infer that a defence may successfully be relied upon. Yet the information from which the possibility of a defence (such as reasonable excuse) might be inferred will very often be extrinsic context that, especially for proactive obligations, is not available to a platform. Would that not inevitably require removal of content that is in fact legal? For automated real time systems would that not occur at scale?

15. The Bill requires platforms to have particular regard to the importance of protecting users’ right to freedom of expression ‘within the law’. Does that modify the express requirements of Clause 170 as to how a platform should assess illegality? If so, how?

16. The government’s European Convention on Human Rights Memorandum contains no discussion of the Bill’s illegality duties as a form of prior restraint. Nor does it address the human rights implications of the ‘reasonable grounds to infer’ clause, which was introduced later. Will the government issue a revised Memorandum?

17. Is it intended that the risks of harm to individuals to be mitigated and managed under Clause 9(2)(c) should be limited to those arising from illegality identified in the illegality risk assessment? If so, how does the Bill achieve that?

18. The Bill contains powers to require private messaging services to use accredited technology to identify CSEA content. It also contains an obligation to report all new detected material to the National Crime Agency. The Explanatory Notes state that services will be required to report all and any available information relating to instances of CSEA, including any that help identify a perpetrator or victim. 

The White Paper noted that “Many children and young people take and share sexual images. Creating, possessing, copying or distributing sexual or indecent images of children and young people under the age of 18 is illegal, including those taken and shared by the subject of the image.” Does this mean that an under-18 consensually taking and sharing an indecent selfie on a private messaging platform would automatically be reported to the National Crime Agency if the image is detected by the platform?

19. What are the estimated familiarisation and compliance costs for an in-scope small business or voluntary user-to-user service? What is the calculation of the estimated costs? 

20. The Law Commission in 2018 stated that the common law public nuisance offence applied to online communications. The statutory replacement in s.78 of the Police, Crime, Sentencing and Courts Act 2022 does so too. Could a platform’s reactive duty under Cl. 9, combined with Cl. 170, require it to determine whether it has reasonable grounds to infer that a user’s post creates a risk of causing serious annoyance to a section of the public?



Tuesday, 13 December 2022

(Some of) what is legal offline is illegal online

From what feels like time immemorial the UK government has paraded its proposed online harms legislation under the banner of ‘What is Illegal Offline is Illegal Online’. As a description of what is now the Online Safety Bill, the slogan is ill-fitting. The Bill contains nothing that extends to online behaviour a criminal offence that was previously limited to offline. 

That is for the simple reason that almost no such offences exist. An exception that proves the rule is the law requiring imprints only on physical election literature, a gap that has been plugged not by the Online Safety Bill but by the Elections Act 2022.  

If the slogan is intended to mean that since what is illegal offline is illegal online, equivalent mechanisms should be put in place to combat online illegality, that does not compute either. As we shall see, the Bill's approach differs significantly from offline procedures for determining the illegality of individual speech - not just in form and process, but in the substantive standards to be applied.  

Perhaps in implicit recognition of these inconvenient truths, the government’s favoured slogan has undergone many transformations:

- “We will be consistent in our approach to regulation of online and offline media.” (Conservative Party Manifesto, 18 May 2017)

- “What is unacceptable offline should be unacceptable online.” (Internet Safety Strategy Green Paper, October 2017)

- “Behaviour that is illegal offline should be treated the same when it’s committed online.” (Then Digital Minister Margot James, 1 November 2018)

- “A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…” (Then Culture Secretary Jeremy Wright QC, 21 February 2019)

- “For illegal harms, it is also important to ensure that the criminal law applies online in the same way as it applies offline” (Online Harms White Paper, April 2019)

- "Of course... what is illegal offline is illegal online, so we have existing laws to deal with it." (Home Office Lords Minister Baroness Williams, 13 May 2020)

- “If it’s unacceptable offline then it’s unacceptable online” (DCMS, tweet 15 December 2020)

- "If it is illegal offline, it is illegal online.” (Then Culture Secretary Oliver Dowden, House of Commons 15 December 2020)

- “The most important provision of [our coming online harms legislation] is to make what's illegal on the street, illegal online” (Then Culture Secretary Oliver Dowden, 29 March 2021)

- “What's illegal offline should be regulated online.” (Damian Collins, then Chair of the Joint Pre-Legislative Scrutiny Committee, 14 December 2021)

- “The laws we have established to protect people in the offline world, need to apply online as well.” (Then former DCMS Minister Damian Collins MP, 2 Dec 2022)

Now, extolling its newly revised Bill, the government has reverted to simplicity. DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online?  The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person." 

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that.

Section 127 – the online-only criminal offence

First, consider illegality in its most commonly understood sense: criminal offences.

The latest version of the Bill scraps the previously proposed new harmful communications offence, reinstating S.127(1) of the Communications Act 2003 which it would have replaced. The harmful communications offence, for all its grievous shortcomings, made no distinction between offline and online. S.127(1), however, is online only. Moreover, it is more restrictive than any offline equivalent.

S.127(1) is the offence that, notoriously, makes it an offence to send by means of a public electronic communications network a “message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. It is difficult to be sure of its precise scope – indeed one of the main objections to it is the vagueness inherent in ‘grossly offensive’. But it has no direct offline counterpart. 

The closest equivalent is the Malicious Communications Act 1988, also now to be reprieved. The MCA applies to both offline and online communications. Whilst like S.127(1) it contains the ‘grossly offensive’ formulation, it is narrower by virtue of a purpose condition that is absent in S.127(1). Also the MCA offence appears not to apply to generally available, non-targeted postings on an online platform (Law Commission Scoping Report 2018, paras 4.26 to 4.29). That leaves S.127(1) not only broader in substance, but catching many kinds of online communication to which the MCA does not apply at all.

Para 4.63 of the Law Commission Scoping Report noted: “Indeed, as subsequent Chapters will illustrate, section 127 of the CA 2003 criminalises many forms of speech that would not be an offence in the “offline” world, even if spoken with the intention described in section 127.”

For S.127(1) that situation will be continued - at least while the government gives further consideration to the criminal law on harmful communications. But although the new harmful communications offence was rightly condemned, was the government really faced with having to make a binary choice between frying pan and fire?

Online liability to have content filtered or removed

Second, we have illegality in terms of ‘having my content compulsorily removed’.

This is not illegality in the normal sense of liability to be prosecuted and found guilty of a criminal offence. Nor is it illegality in the sense of being sued and found liable in the civil courts. It is more akin to an author having their book seized with no further sanction. We lawyers may debate whether this is illegality properly so called. To the user whose online post is filtered or removed it will certainly feel like it, even though no court has declared the content illegal or ordered its seizure.

The Bill creates this kind of illegality (if it be such) in a novel way: an online post would be filtered or removed by a platform because it is required to do so by virtue of a preventative or reactive duty of care articulated in the Bill. This creature of statute has - for speech - no offline equivalent. See discussion here and here

The online-offline asymmetry does not stop there. If we dig more deeply into a comparison with criminal offences we find other ways in which the Bill’s illegality duty treats online content more restrictively than offline. 

Two features stand out, both stemming from the Bill's recently inserted clause setting out how online platforms should adjudge the illegality of users' content.

The online illegality inference engine

First, in contrast to the criminal standard of proof – beyond reasonable doubt – the platform is required to find illegality if it has ‘reasonable grounds to infer’ that the elements of the offence are present.  That applies both to factual elements and to any required purpose, intention or other mental element.

The acts potentially constituting an offence may be cast widely, in which event the most important issues are likely to be intent and whether the user has an available defence (such as, in some cases, reasonable excuse). 

Under the Bill, unless the platform has information on the basis of which it can infer that a defence may successfully be relied on, the possibility of a defence is to be left out of consideration.  That leads into the second feature.

The online information vacuum

The Bill requires platforms to determine illegality on the basis of information reasonably available to them. But how much (or little) information is that likely to be?  

Platforms will be required to make decisions on illegality in a comparative knowledge vacuum. The paucity of information is most apparent in the case of proactive, automated real time filtering. A system can work only on user content that it has processed, which inevitably omits extrinsic contextual information. 

For many offences, especially those in which defences such as reasonable excuse bear the main legality burden, such absent contextual information would otherwise be likely to form an important, even decisive, part of determining whether an offence has been committed. 

For both of these reasons the Bill’s approach to online would inevitably lead to compulsory filtering and removal of legal online content at scale, in a way that has no counterpart offline. It is difficult to see how a requirement on platforms to have regard (or particular regard, as a proposed government amendment would have it) to the importance of protecting users’ right to freedom of expression within the law could act as an effective antidote to express terms of the legislation that spell out how platforms should adjudge illegality.
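A back-of-the-envelope calculation illustrates the point. The figures below are purely hypothetical assumptions chosen for arithmetic convenience, not estimates of any real platform or filtering system:

```python
# Hypothetical figures for illustration only; not measurements of any real system.
posts_scanned_per_day = 10_000_000   # assumed volume processed by a proactive filter
prevalence = 0.001                   # assumed fraction of posts that are actually illegal
false_positive_rate = 0.01           # assumed rate at which legal posts are flagged
true_positive_rate = 0.90            # assumed rate at which illegal posts are caught

illegal_posts = posts_scanned_per_day * prevalence
legal_posts = posts_scanned_per_day - illegal_posts

legal_posts_flagged = legal_posts * false_positive_rate      # wrongly flagged
illegal_posts_flagged = illegal_posts * true_positive_rate   # correctly flagged

print(f"Legal posts flagged per day:   {legal_posts_flagged:,.0f}")   # ~99,900
print(f"Illegal posts flagged per day: {illegal_posts_flagged:,.0f}") # ~9,000
```

On those assumed figures a filter would flag roughly eleven legal posts for every genuinely illegal one, even with seemingly respectable accuracy rates.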

Online prior restraint

These two features exist against the background that the illegality duty is a form of prior restraint: the Bill requires content filtering and removal decisions to be made before any fully informed, fully argued decision on the merits takes place (if it ever would). A presumption against prior restraint has long formed part of the English common law and of human rights law. For online, no longer.



Friday, 25 November 2022

How well do you know the Online Safety Bill?

With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate? Click on the expandable text to find out.  

The Bill makes illegal online what is illegal offline.
No. We have to go a long way to find a criminal offence that does not already apply online as well as offline (other than those such as driving a car without a licence, which by their nature can apply only to the physical world). One of the few remaining anomalies is the paper-only requirement for imprints on election literature – a gap that will be plugged when the relevant provisions of the Elections Act 2022 come into force.

Moreover, in its fundamentals the Bill departs from the principle of online-offline equivalence. Its duties of care are extended in ways that have no offline equivalent. It creates a broadcast-style Ofcom regulatory regime that has no counterpart for individual speech offline: regulation by discretionary regulator rather than by clear, certain, general laws.

The real theme underlying the Bill is far removed from offline-online equivalence. It is that online speech is different from offline: more reach, more persistent, more dangerous and more in need of a regulator’s controlling hand.

Under the Bill's safety duty, before removing a user's post a platform will have to be satisfied to the criminal standard that it is illegal.
No. The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.

Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon.

The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality. In a recent House of Lords debate Lord Gilbert, who chaired the Lords Committee that produced a Report on Freedom of Expression in the Digital Age, asked whether the government had considered a change in the standard from “reasonable grounds to believe” to “manifestly illegal”.  The government minister replied by referring to the "reasonable grounds to infer" amendment, which he said would protect against both under-removal and over-removal of content.

The Bill will repeal the S.127 Communications Act 2003 offences.
Half true. Following a recommendation by the England and Wales Law Commission the Bill will replace both S.127 (of Twitter Joke Trial notoriety) and the Malicious Communications Act 1988 with new offences, notably sending a harmful communication.

However, the repeal of S.127 is only for England and Wales. S.127 will continue in force in Scotland. As a result, for the purposes of a platform’s illegality safety duty the Bill will deem the remaining Scottish S.127 offence to apply throughout the UK. So in deciding whether it has reasonable grounds to infer illegality a platform would have to apply both the existing S.127 and its replacement. [Update: the government announced on 28 November 2022 that the 'grossly offensive' offences under S.127(1) and the MCA 1988 will no longer be repealed, following its decision to drop the new harmful communications offence.] 

A platform may be required to adjudge whether a post causes spiritual injury.
True. The National Security Bill will create a new offence of foreign interference. One route to committing the offence requires establishing that the conduct involves coercion. An example of coercion is given as “causing spiritual injury to, or placing undue spiritual pressure on, a person”.

The new offence would be designated as a priority offence under the Online Safety Bill, meaning that platforms would have to take proactive steps to prevent users encountering such content.

A platform may be required to adjudge whether a post represents a contribution to a matter of public interest.
True. The new harmful communications offence (originating from a recommendation by the Law Commission) provides that the prosecution must prove, among other things, that the sender has no reasonable excuse for sending the message. Although not determinative, one of the factors that the court must consider (if it is relevant in a particular case) is whether the message is, or is intended to be, a contribution to a matter of public interest.

A platform faced with a complaint that a post is illegal by virtue of this offence would be put in the position of making a judgment on public interest, applying the standard of whether it has reasonable grounds to infer illegality. During the Commons Committee stage the then Digital Minister Chris Philp elaborated on the task that a platform would have to undertake. It would, he said, perform a "balancing exercise" in assessing whether the content was a contribution to a matter of public interest. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age contains the following illuminating exchange: 'We asked the Law Commission how platforms’ algorithms and content moderators could be expected to identify posts which would be illegal under its proposals. Professor Lewis told us: “We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it.”' However, it is the freedom of speech of users, not businesses, that is violated by the arbitrariness inherent in requiring platforms to adjudge vague laws.

Platforms would be required to filter users’ posts.
Highly likely, at least for some platforms. All platforms would be under a duty to take proportionate proactive steps to prevent users encountering priority illegal content, and (for services likely to be accessed by children) to prevent children from encountering priority content harmful to children. The Bill gives various examples of such steps, ranging from user support to content moderation, but the biggest clues are in the Code of Practice provisions and the enforcement powers granted to Ofcom.

Ofcom is empowered to recommend in a Code of Practice (if proportionate for a platform of a particular kind or size) proactive technology measures such as algorithms, keyword matching, image matching, image classification or behaviour pattern detection in order to detect publicly communicated content that is either illegal or harmful to children. Its enforcement powers similarly include use of proactive technology. Ofcom would have additional powers to require accredited proactive technology to be used in relation to terrorism content and CSEA (including, for CSEA, in relation to private messages).
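By way of illustration, keyword matching - the simplest technology in that list - operates on nothing but the text it is given. A minimal sketch (with invented terms, and far cruder than anything Ofcom might accredit) shows how such a tool is blind to context:

    # Illustrative only: a toy keyword filter, not any real accredited technology.
    PRIORITY_TERMS = {"hypothetical banned phrase", "another banned phrase"}

    def flag_post(text: str) -> bool:
        """Flag a post containing any priority term, whatever the context."""
        lowered = text.lower()
        return any(term in lowered for term in PRIORITY_TERMS)

    # A news report, a condemnation and an endorsement are all flagged alike.
    print(flag_post("Police report quoting the hypothetical banned phrase"))  # True

More sophisticated measures such as image classification or behaviour pattern detection share the same structural feature: they act only on the material that the system processes.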

The Bill regulates platforms, not users.
False dichotomy. The Bill certainly regulates platforms, but does so by pressing them into service as proxies to control content posted by users. The Bill thus regulates users at one remove. It also contains new criminal offences that would be committed directly by users.

The Bill outlaws hurting people's feelings.
No, but the new harmful communications offence comes close. It would criminalise sending, with no reasonable excuse, a message carrying a real and substantial risk that it would cause psychological harm - amounting to at least serious distress - to a likely member of the audience, with the intention of causing such harm. There is no requirement that the response of a hypothetical seriously distressed audience member should be reasonable. One foreseeable hypersensitive outlier is enough. Nor is there any requirement to show that anyone was actually seriously distressed.

The Law Commission, which recommended this offence, considered that it would be kept within bounds by the need to prove intent to cause harm and the need to prove lack of reasonable excuse, both to the criminal standard. However, the standard to which platforms will operate in assessing illegality is reasonable grounds to infer. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill also refers to psychological harm in other contexts, but without defining it further. The government intends that psychological harm should not be limited to a medically recognised condition.

The Bill recriminalises blasphemy.
Quite possibly. Blasphemy was abolished as a criminal offence in England and Wales in 2008 and in Scotland in 2021. The possible impact of the harmful communications offence (see previous item) has to be assessed against the background that people undoubtedly exist who experience serious distress (or at least claim to do so) upon encountering content that they regard as insulting to their religion. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill is all about Big Tech and large social media companies.
No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.

The Bill is all about algorithms that push and amplify user content.
No. The Bill makes occasional mention of algorithms, but the core duties would apply regardless of whether a platform makes use of algorithmic curation. A plain vanilla discussion forum is within scope.

The Secretary of State can instruct Ofcom to modify its Codes of Practice.
True. Clause 40 of the Bill empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the Secretary of State believes that modifications are required (a) for reasons of public policy, or (b) in the case of a terrorism or CSEA code of practice, for reasons of national security or public safety. The Secretary of State can keep sending the modified draft back for further modification.

A platform will be required to remove content that is legal but harmful to adults.
No. The legal but harmful to adults duty (should it survive in the Bill) applies only to Category 1 platforms and on its face only requires transparency. Some have argued that its effect will nevertheless be heavily to incentivise Category 1 platforms to remove such content. [Update: the government announced on 28 November 2022 that the legal but harmful to adults duty will be dropped.]

The Bill is about systems and processes, not content moderation.
False dichotomy. Whilst the Bill's illegality and harm to children duties are couched in terms of systems and processes, it also lists measures that a service provider is required to take or use to fulfil those duties, if it is proportionate to do so. Content moderation, including taking down content, is in the list. It is no coincidence that the government’s Impact Assessment estimates additional moderation costs over a 10 year period at nearly £2 billion.

Ofcom could ban social media quoting features.
Indeterminate. Some may take the view that enabling social media quoting encourages toxic behaviour (the reason why the founder of Mastodon did not include a quote feature). A proponent of requiring more friction might argue that it is the kind of non-content oriented feature that should fall within the ‘safety by design’ aspects of a duty of care - an approach that some regard as preferable to moderating specific content.

Ofcom deprecation of a design feature would have to be tied to some aspect of a safety duty under the Bill and perhaps to risk of physical or psychological harm. There would likely have to be evidence (not just an opinion) that the design feature in question contributes to a relevant kind of risk within the scope of the Bill. From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.  

Of course the Bill does not tell us which design features should or should not be permitted. That is in the territory of the significant discretion (and consequent power) that the Bill places in the hands of Ofcom. If it were considered to be within scope of the Bill and proportionate to deprecate a particular design feature, in principle Ofcom could make a recommendation in a Code of Practice. That would leave it to the platform either to comply or to explain how it satisfied the relevant duty in some other way. Ultimately Ofcom could seek to invoke its enforcement powers.

The Bill will outlaw end to end encryption.
Not as such, but... Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.

The government’s response to the Pre-legislative Scrutiny Committee is also illuminating: “End-to-end encryption should not be rolled out without appropriate safety mitigations, for example, the ability to continue to detect known CSEA imagery.” 
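The reference to detecting known CSEA imagery points towards hash matching: comparing a fingerprint of each image against a database of fingerprints of previously identified material. The sketch below illustrates only the general technique, using an ordinary cryptographic hash and invented data; real systems use perceptual hashes (so that trivial alterations do not defeat the match) and tightly controlled hash lists.

    import hashlib

    def fingerprint(data: bytes) -> str:
        """Illustrative fingerprint: a SHA-256 digest of the raw bytes."""
        return hashlib.sha256(data).hexdigest()

    # Hypothetical hash list; in practice a curated database of perceptual hashes
    # of previously identified images.
    KNOWN_IMAGE_HASHES = {fingerprint(b"previously identified image bytes")}

    def matches_known_image(image_bytes: bytes) -> bool:
        # Client-side scanning of this general kind would run the comparison on
        # the user's device before, or as, the message is end-to-end encrypted.
        return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES

    print(matches_known_image(b"previously identified image bytes"))  # True
    print(matches_known_image(b"an unrelated holiday photo"))         # False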

The press are exempt.
True up to a point, but it’s complicated.

First, user comments under newspaper and broadcast stories are intended to be exempt as ‘limited functionality’ under Schedule 1 (but the permitted functionality is extremely limited, for instance apparently excluding comments on comments).

Second, platforms' safety duties do not apply to recognised news publisher content appearing on their services. However, many news and other publishers will fall outside the exemption. 

Third, various press and broadcast organisations are exempted from the new harmful and false communications offences created by the Bill. 
[Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

[Updated 3 December 2022 to take account of the government announcement on 28 November 2022.]

Wednesday, 2 November 2022

On the Dotted Line

The topic of electronic signatures seems cursed to eternal life. In the blue corner we have the established liberal English law approach to signatures, which eschews formality and emphasises intention to authenticate. In the red corner we have preoccupation with verifying identity of the signatory, with technically engineered digital signatures and with the EU’s eIDAS hierarchy of qualified, advanced and ordinary electronic signatures.

In the English courts the blues have it. Judges have upheld the validity of electronic signatures as informal as signing a name at the end of an e-mail or even, in one case, clicking an ‘I accept’ button on an electronic form. They have been able to do this partly because, with very few exceptions, the England and Wales legislature has refrained from stipulating use of an eIDAS-compliant qualified or advanced signature as a condition of validity. The eIDAS hierarchy does form part of our law, but – rather like the Interpretation Act - in the guise of a toolkit that is available to be used or not as the legislature wishes. The toolkit has for the most part remained on the legislative shelf.

The potential consequences of stipulating eIDAS-style formalities in legislation are graphically illustrated by the Austrian case of the Wrong Kind of Signature. A €3bn contract to supply double-decker trains to Austrian Federal Railways was invalidated because the contract was signed with a qualified electronic signature supported by a Swiss, rather than an EU, Trusted Service Provider.

The modern English law aversion to imposition of formalities was pithily encapsulated in an official committee report of 1937, describing the Statute of Frauds:

“'The Act', in the words of Lord Campbell . . . 'promotes more frauds than it prevents'. True it shuts out perjury; but it also and more frequently shuts out the truth. It strikes impartially at the perjurer and at the honest man who has omitted a precaution, sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to assert that 'in the vast majority of cases its operation is simply to enable a man to break a promise with impunity, because he did not write it down with sufficient formality.'”

For its part eIDAS continues to complicate and confound. February’s Interim Report of the Industry Working Group on the Electronic Execution of Documents, running to 94 pages of discussion, stated that ‘only’ qualified electronic signatures have equivalent legal status to handwritten signatures (meaning, according to the Report, that they carry a presumption of authenticity). Yet while eIDAS does require equivalent legal effect (whatever that may mean) to be accorded to qualified signatures, it does not require other kinds of electronic signature to be denied that status; nor has English domestic law done so.

Back in the courts, a recent decision of Senior Costs Judge Gordon-Saker in Elias v Wallace LLP [2022] EWHC 2574 (SCCO) continues down the road of upholding the validity of informal electronic signatures. Under the Solicitors Act 1974 (as amended) a solicitor’s bill cannot be enforced by legal proceedings unless it complies with certain formalities, including that it has to be:

“(a) signed by the solicitor or on his behalf by an employee of the solicitor authorised by him to sign, or

(b) enclosed in, or accompanied by, a letter which is signed as mentioned in paragraph (a) and refers to the bill.”

The Act states that the signature may be an electronic signature. It takes its definition of electronic signature from s.7(2) of the Electronic Communications Act 2000[1], as amended:  

“… so much of anything in electronic form as –

(a)   is incorporated into or otherwise logically associated with any electronic communication or electronic data; and

(b)   purports to be used by the individual creating it to sign.”

This is an unusual example of English legislation stipulating compliance with a defined kind of signature (albeit that S.7(2) is framed in very broad terms) as a condition of validity. Most legislation requiring a signature goes no further than a generally stated requirement that the document must be signed[2].

The bills in question were sent to the solicitor’s client as e-mail attachments. The bills themselves were not signed, but the covering e-mails concluded with the words:

“Best regards,

Alex

[first name and surname]

Partner

[telephone numbers, firm name and physical and website addresses]”.

The judge held:

  1. The printed name of the firm incorporated in the invoice, like a letterheading, was not a signature. This unsurprising conclusion is reminiscent of Mehta v J Pereira Fernandes SA [2006] EWHC 813 in which the same was held for an e-mail address appearing at the top of an e-mail.
  2. If the name ‘Alex’ was not generated automatically, clearly it purported to be used as a signature.
  3. If the name ‘Alex’ was auto-generated, then on the authority of Neocleous v Rees that would constitute a signature. The e-mail footer was clearly applied with authenticating intent, even if it was the product of a rule.

The judge also held that ‘letter’ should be interpreted to include e-mail. That is a salutary reminder that the ability to conduct a transaction electronically may not be only a question of whether electronic signatures are permissible. Other requirements of form and process can also come into play.

[1] Note that the role of S.7 was to make explicit (almost certainly unnecessarily) that electronic signatures as defined by the section were admissible as evidence, whereas the Solicitors Act provision concerns substantive validity.

[2] As to which, see the England and Wales Law Commission’s Statement of the Law in its Report on Electronic Execution of Documents (2019).