Monday, 31 January 2022

Internet legal developments to look out for in 2022

Another instalment of my annual round-up of what is on the horizon for UK internet law [Updated 29 April and 2 November 2022]. It does stray a little beyond our shores, noting some significant EU developments (pre-Brexit habits die hard). As always, it does not include data protection (too big, not really my field).

Draft Online Safety Bill The UK government published its draft Online Safety Bill in May 2021. The Parliamentary Joint Pre-Legislative Scrutiny Committee published its report on the draft Bill on 14 December 2021. A sub-committee of the Commons DCMS Select Committee also published a report on 24 January 2022, as did the Lords Communications and Digital Committee Inquiry on Freedom of Expression Online on 22 July 2021.

The government introduced a Bill into Parliament on 17 March 2022. The Bill had its Second Reading on 19 April 2022. Its Report Stage is paused, likely to recommence this month. Among the many things for which the legislation is notable, its abandonment of the ECD Article 15 prohibition on general monitoring obligations stands out.

EU Digital Services Act The European Commission published its proposals for a Digital Services Act and a Digital Markets Act on 15 December 2020. The proposed Digital Services Act includes replacements for Articles 12 to 15 of the ECommerce Directive. Following a vote in the European Parliament on 20 January 2022, the proposed legislation entered the trilogue stage. Political agreement was reached on 23 April 2022, and the final text was published in the Official Journal on 27 October 2022.

Terrorist content The EU Regulation on addressing the dissemination of terrorist content online came into effect on 7 June 2022.

Erosion of intermediary liability shields by omission One by-product of Brexit is that the UK is no longer bound to implement the conduit, caching and hosting shields provided by the EU eCommerce Directive. The government says that it “is committed to upholding the liability protections now that the transition period has ended”.

However, implementation of that policy requires every new piece of legislation that could impose liability on an intermediary explicitly to include the protections. If that is not done then, because the original Electronic Commerce Directive Regulations 2002 do not have prospective effect, the protections will not apply to that new source of liability.

Two examples have already progressed through Parliament: the statutory codification of the public nuisance offence in the Policing Bill (which, following Royal Assent, came into force on 26 June 2022), and the electronic election imprints offences in the Elections Bill (Royal Assent 28 April 2022, not yet in force). Neither includes the conduit, caching and hosting shields.

Such omissions have been known in the past, and were cured by statutory instrument under the European Communities Act 1972. That option is no longer available. As time goes on, accretion of such omissions in new legislation will gradually erode the intermediary protections to which the government is committed.

Law Commission Reports The Law Commission has issued two Reports making recommendations that are relevant to online speech. The first is its Report on Reform of the Communications Offences (notably, recommending replacing S.127 Communications Act 2003 and the Malicious Communications Act 1988 with a new harm-based offence). The second is its Report on Hate Crime Laws. The recommendations on communications offences, at least, have been included in the Online Safety Bill.

Copyright The Polish government’s challenge to Article 17 (Poland v Parliament and Council, Case C-401/19) was decided on 26 April 2022. Poland argued that Article 17 makes it necessary for OSSPs, in order to avoid liability, to carry out prior automatic filtering of content uploaded online by users, and therefore to introduce preventive control mechanisms. It contended that such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportionate and necessary.

The Advocate-General’s Opinion was delivered on 15 July 2021. It was something of an Opinion of Solomon: recommending that the challenge be rejected, but only on the basis that the Directive is implemented in a way that minimises false positives. The Advocate General also, in a postscript, challenged aspects of the Article 17 guidance issued by the Commission subsequent to the drafting of the Opinion. The judgment largely followed the Opinion, dismissing the challenge but on the basis of an interpretation of Article 17 that included strict safeguards against removal of lawful content.

Policing Bill The Police, Crime, Sentencing and Courts Bill has ignited significant controversy over its impact on street protests, including through its statutory codification of the common law offence of public nuisance. The potential application of the new statutory offence to online speech, however, has gone virtually unnoticed.  

Product Security and Telecommunications Infrastructure Bill An honourable mention for this Bill: a framework for imposing all kinds of security requirements on (among other things) internet-connectable products.

Back from the dead? The Digital Economy Act 2017 The non-commencement of the age verification provisions of the Digital Economy Act 2017 has long been a source of controversy. In November 2021 the High Court gave permission to two members of the public to commence judicial review proceedings. This may now in practice have been overtaken by the inclusion of pornography sites in the Online Safety Bill.

Cross-border data access The US and the UK signed a Data Access Agreement on 3 October 2019, providing domestic law comfort zones for service providers to respond to data access demands from authorities located in the other country. The Agreement came into force on 3 October 2022.

The Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence was opened for signature on 12 May 2022 and presented to the UK Parliament in July 2022.

State communications surveillance The kaleidoscopic mosaic of cases capable of affecting the UK’s Investigatory Powers Act 2016 (IP Act) continues to reshape itself. In this field CJEU judgments will continue to be relevant in principle, since they form the backdrop to future reviews of the European Commission’s June 2021 UK data protection adequacy decision.

Domestically, Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. Some EU law aspects (including bulk powers) were stayed pending the Privacy International reference to the CJEU. The Divisional Court rejected the claim that the IP Act data retention powers provide for the general and indiscriminate retention of traffic and location data, contrary to EU law; that point may in due course come before the Court of Appeal. The Divisional Court gave judgment on the stayed aspects on 24 June 2022, rejecting Liberty’s claims except for one aspect concerning the need for prior independent authorisation for access to some retained data.

Investigatory Powers Act review The second half of 2022 will see the Secretary of State preparing the report on the operation of the IP Act required under Section 260 of the Act.

Electronic transactions The pandemic focused attention on legal obstacles to transacting electronically and remotely. Whilst uncommon in commercial transactions, some impediments do exist and, in a few cases, were temporarily relaxed. That may pave the way for permanent changes in due course.

Although the question typically asked is whether electronic signatures can be used, the most significant obstacles tend to be presented by surrounding formalities rather than signature requirements themselves. A case in point is the physical presence requirement for witnessing deeds, which stands in the way of remote witnessing by video or screen-sharing. The Law Commission Report on Electronic Execution of Documents recommended that the government should set up an Industry Working Group to look at that and other issues. The Working Group has now been formed. It issued an Interim Report on 1 February 2022.

[Updated 29 April 2022 and 2 November 2022.]



Monday, 22 November 2021

Licence to chill

To begin with, a confession. I should probably have paid more attention to the Law Commission’s project on reforming communications offences.  The Commission published its Final Report in July 2021, recommending new offences to replace S.127 Communications Act 2003 and the Malicious Communications Act 1988.  

Now that the government has indicated that it is minded to accept the Law Commission’s recommendations, a closer – even if 11th-hour – look is called for: doubly so, since under the proposed Online Safety Bill a service provider would be obliged to take steps to remove user content if it has “reasonable grounds to believe” that the content is illegal. The two provisions would thus work hand in glove. [The Bill as introduced to Parliament omitted the "reasonable grounds to believe" threshold. It was silent as to what standard a service provider should apply to adjudge illegality. "Reasonable grounds to infer" is now being introduced by a government amendment at Report Stage.]

There is no doubt that S.127, at any rate, is in need of reform. The question is whether the proposed replacement is an improvement. Unfortunately, that closer look suggests that the Law Commission’s recommended harm-based offence has significant problems. These arise in particular for a public post to a general audience. 

The proposed new offence

The elements of the Law Commission’s proposed new offence are:

(1) the defendant sent or posted a communication that was likely to cause harm to a likely audience;

(2) in sending or posting the communication, the defendant intended to cause harm to a likely audience; and

(3) the defendant sent or posted the communication without reasonable excuse.

(4) For the purposes of this offence:

(a) a communication is a letter, article, or electronic communication;

(b) a likely audience is someone who, at the point at which the communication was sent or posted by the defendant, was likely to see, hear, or otherwise encounter it; and

(c) harm is psychological harm, amounting to at least serious distress.

(5) When deciding whether the communication was likely to cause harm to a likely audience, the court must have regard to the context in which the communication was sent or posted, including the characteristics of a likely audience.

(6) When deciding whether the defendant had a reasonable excuse for sending or posting the communication, the court must have regard to whether the communication was, or was meant as, a contribution to a matter of public interest.

The Law Commission goes on to recommend that “likely” be defined as “a real or substantial risk”. This requires no further explanation for “likely to cause harm”. For “a likely audience”, it would mean a real or substantial risk of seeing, hearing, or otherwise encountering the communication. (Report [2.119])

Psychological harm

The challenge for any communications offence based on harm to a reader is how to reconcile the need for an objective rule governing speech with the subjectivity of how speech is perceived. The cost of getting it wrong is that we end up with a variation on the heckler’s veto: speech chilled by fear of criminal liability arising from the bare assertion of a claim to have suffered harm.

The focus on likelihood of ‘psychological harm’ as the criterion for the recommended offence has provoked criticism on grounds of subjectivity. It is notorious that protagonists in controversial areas of debate may claim to be traumatised by views with which they are in deep disagreement. The very kinds of speech that are meant to have the greatest freedom of expression protections – political and religious – are perhaps those in relation to which that kind of claim is most likely to be made.

The Law Commission would argue that the recommended offence revolves around whether relevant harm is likely to be caused to someone likely to encounter the communication in question (the ‘conduct element’ of the offence). Harm has to be both likely and serious. A bare claim to have suffered harm would therefore not of itself demonstrate that harm was likely or serious, since a complainant might be unforeseeably sensitive. Additionally, the prosecution would have to show that the communication was made without reasonable excuse and that the defendant intended to harm someone likely to encounter the communication.

Kinds of audience

The Law Commission stresses that the offence would focus on the factual context within which the communication took place. Thus, the likelihood of a private communication sent to one person causing harm would be adjudged most obviously according to the characteristics of the intended recipient. If there was a real or substantial risk that the intended recipient would suffer harm, then (whether or not the intended recipient actually suffered harm) the conduct element would be made out. Further, if it was likely (at the point of sending the communication) that someone other than the intended recipient would also see the communication, then it would be relevant to consider whether that other person would be likely to suffer harm from doing so, taking into account their characteristics.

A similar analysis would apply to a group of readers. A post to a forum dedicated to disability issues would be likely to be read by people with disabilities. That characteristic would be taken into account, with the result that a likely audience would be likely to be caused serious distress by a hate post about disabled people. The Law Commission Consultation Paper applies that logic to the example of a tweet directed to a well-known disability charity by means of the ‘@’ function. The likely audience would primarily be the charity and its followers, many of whom could be assumed to have a disability.

How, though, should this analysis be applied to a public post to a general audience? What would be the relevant characteristics of a likely audience? How are those to be determined when no particular kind of individual is especially likely to encounter the post?

Does the general nature of the audience mean that the risk of satisfying the conduct element is reduced, because no particular relevant characteristics of an audience can be identified? Or is the risk increased, as the larger the audience the more likely it is to contain at least one person with characteristics such that they are likely to suffer harm? Since the draft offence refers to ‘someone’, one likely person appears to be sufficient to amount to a likely audience. The Consultation Paper at [5.124], discussing ‘likely audience’ in the context of the then proposed mental element of the offence, adopts that position.

The Law Commission Report does not fully address the question of the characteristics of a general audience. It responded to submissions raising concerns on the question of public posts by rejecting suggestions that a “reasonable person” standard should be applied, on the basis that sufficient protection was provided by the requirement of intent to harm and the need to prove lack of reasonable excuse.

Actual or hypothetical audience?

The uncertainty about the position of public posts to a general audience is exacerbated by lack of clarity over whether the conduct element of the offence requires proof that someone likely to encounter the communication actually did so (in which case the court’s analysis would presumably tend to be focused on the characteristics of the person shown to have encountered it, and the likelihood of their being harmed as a result); or whether it would be sufficient to rely on the mere likelihood of someone encountering it (in which case the court would appear to have to decide what characteristics to attribute to a hypothetical likely member of the audience).

If the latter, then at least for a public post to a general audience the relevant factual context – a feature of the proposed offence on which the Law Commission places considerable reliance – would seem, as regards the characteristics of the hypothetical person likely to suffer harm, to have to be constructed in the minds of the judge or jury.

The Law Commission states that the proposed offence is complete, both for likely harm and likely audience, at the point of sending the communication (Rep 2.56, 2.91, 2.117). On that logic it should not matter if no-one can be shown actually to have been harmed or actually to have encountered the communication. Proof of likelihood should suffice for both.      

The Law Commission also says (Rep 2.256) that:

“where a communication was sent or posted from a device to a social media platform, but was not made visible by that platform (perhaps because of preventative algorithms), it could be impossible for the offence to be made out because the prosecution would have to prove that there was a likely audience who was at a real and substantial risk of seeing the message. It might be that no one was at a real or substantial risk of seeing the communication (i.e. the likely audience was nobody).”

If the offence is complete at the point of sending, and if sending is the point at which the likely audience is to be determined, what would be the relevance of the post subsequently being blocked by the platform upon receipt? Does the likelihood of the post being blocked have to be considered? So could the offence still be committed if the post was unlikely to be blocked, but in fact was? Or, conversely, would the offence not be committed if the post was likely to be blocked, but slipped through? 

Such conundrums apart, the more hypothetical the conduct element of the offence, the more significant is the Law Commission’s rejection of a “reasonable person” when considering likelihood of harm. It leaves open the possibility that a notional member of a likely audience could foreseeably be someone of unusual, or even extreme, sensitivity.

Whether the likely audience member contemplated by the offence is actual or notional, as already noted the Law Commission’s intention appears to be that it would suffice if one person in the audience were likely to encounter the communication and likely to suffer harm as a result.

The question of whether the actual presence of someone in the audience has to be proved finds a parallel in offences under the Public Order Act 1986. These differ as to whether they require that a real person could have heard the relevant words, or simply that a hypothetical person could have done so. Thus for S.5(1) Public Order Act physical presence matters: were the words used “within the hearing or sight of a person” likely to be caused harm? The presence of an actual person likely to be caused harm has to be proved; but it does not have to be proved that such person actually heard the words or suffered harm. If the person present did hear them, the likelihood of their suffering relevant harm is judged according to their relevant characteristics. Thus a police officer may be regarded as possessing more fortitude than an ordinary member of the public.

In contrast, the offences of riot, affray and violent disorder under the Public Order Act are all expressly framed by reference to the effect of the conduct on a notional person of reasonable firmness hypothetically present at the scene; with no requirement that such a person be at, or be likely to be at, the scene.

Universal standards

One of the main criticisms of the existing law is that the supposedly objective categories of speech laid down (such as ‘grossly offensive’) are so vague as to be unacceptably subjective in their application by prosecutors and the courts. The Law Commission endorses that criticism. It rejects as unworkable universal standards for categories of speech, in favour of a factually context-specific harm-based approach.

Yet a completely hypothetical interpretation of the Law Commission’s proposed offence could require the court to carry out an exercise – attributing characteristics to a notional member of a general audience - as subjective as that for which the existing offences (or at least s.127) are rightly criticised.

The Law Commission emphasises that “likely” harm means a “real or substantial risk”, not a mere risk or possibility. But if the assumed victim is a notional rather than an actual member of a general audience where does that lead, if not into the forbidden territory of inviting the court to divine universal standards: a set of attributes with which a notional member of the audience has to be clothed?

Claims to have suffered actual harm

The converse of the Law Commission’s emphasis on “likely harm” is that if someone claims to have suffered harm from encountering the communication, or indeed proves that they actually have done so, that should not be conclusive.

In practice, as the Law Commission has acknowledged, evidence of actual harm to an actual person may count towards likelihood of harm (but may not be determinative). (Consultation Paper [5.90])

Thus the Law Commission states that “the mere fact that someone was harmed does not imply that harm was likely … the jury or magistrate will have to determine as a matter of fact that, at the point of sending, harm was likely. If a person has an extreme and entirely unforeseeable reaction, the element of likely harm will not be satisfied.” (Report [2.107])

However, the Law Commission has also rejected the suggestion that a reasonableness standard should be applied. The result appears to be that if one person of unusual sensitivity, sufficient to be at real or substantial risk of harm, is foreseeably likely to encounter the communication, then the “likely audience” requirement would be satisfied. Hence the significance of the possible argument that the larger the audience of a public post, the more likely that it may contain such a person.

Insertion into an audience

At the level of practical consequences, whichever interpretation of the proposed offence is correct – actual or hypothetical likely audience member – it appears to provide a route for someone to attempt to criminalise someone else’s controversial views by inserting themselves into a likely audience.  The Law Commission accepted the possibility of this tactic (Report [2.153]), but considered that other elements of the offence (the need to prove lack of reasonable excuse and intent to harm) would constitute sufficient protection from criminalisation.

However, whilst it discussed how a court might approach the matter, the Report did not address in detail the possible deterrent effect on continued communication, nor the interaction with the illegality provisions of the draft Online Safety Bill.

How might the tactic work? Let us assume a social media post to a general audience, not about any one person, but expressing views with which others may profoundly disagree – whether the subject matter be politics, religion, or any other area in which some may claim to be traumatised by views that they find repugnant.

Would such a communication be at risk of illegality if the audience is likely to contain someone who would find what was said severely distressing? The Law Commission’s answer is ‘No’: not because one sensitive person in a general audience is not enough, but first of all because the necessary intent to cause severe distress to a likely audience member would be lacking; and second, because ordinary (even if highly contentious) political discourse should count as a contribution to a matter of public interest (Consultation Paper [5.185] – [5.187], Report [2.152] – [2.153]).

Nevertheless, it would be an easy matter for someone who objects to the contents of the post to seek to put further communications at risk by entering the conversation. One reply from someone who claims to be severely distressed by the views expressed could create an increased risk (actual or perceived) of committing the offence if the views were to be repeated.

That would be the case whether ‘likely audience’ requires the presence of an actual or hypothetical audience member. If it requires a foreseeable actual audience member, one has now appeared. It could hardly be suggested that, for the future, their presence is not foreseeable. The question for the conduct element would be whether, as claimed, they would be likely to be harmed.

If, on the other hand, the “likely audience” is entirely hypothetical, would an intervention by a real person claiming to be harmed make any difference? There are two reasons to think that it could:

1.      If there were any doubt that it was foreseeable that the audience is likely to contain someone with that degree of sensitivity, that doubt is dispelled.

2.     In practice, as the Law Commission has acknowledged, evidence of actual harm to an actual person may count towards likelihood of harm (but may not be determinative).

On either interpretation of the offence, any further communications would be with knowledge of the audience member and their claim to have been harmed. That would create a more concrete factual context for an argument that likely harm resulting from any further communications was intentional.

Of course, if a further communication were to be prosecuted and go to trial it still might not amount to an offence. The context would have to be examined. Serious distress might not be established.  The prosecution might not be able to prove lack of reasonable excuse. Intent to harm might still not be established.

But that is not really the significant issue where chilling effect is concerned. Rational apprehension of increased risk of committing an offence, by virtue of crystallisation of a likely audience and the claim to harm, would be capable of creating a chilling effect on further communications.

The Law Commission may view the need to prove lack of reasonable excuse and intent to harm as fundamental to a court’s consideration. However, someone told that their potential criminal liability for future posts rests on those two criteria might, rationally, see things less diffidently.

If insertion into the audience has not chilled further communication, a further tactical step could be to notify the platform, asserting that it now has reasonable grounds to believe the continuing posts are illegal. Reasonable grounds (not actual illegality, manifest illegality or even likely illegality) is the threshold that would trigger the platform’s duty to take the posts down swiftly under S.9(3)(d) of the draft Online Safety Bill.

Conclusion

The Law Commission’s proposal draws some inspiration from legislation enacted in 2015 in New Zealand. That, too, is contextual and harm-based. However, the New Zealand offence is firmly anchored in actual harm to an actual identifiable person at whom the communication was targeted, and is qualified by an ‘ordinary reasonable person’ provision. The Law Commission has cut its recommended offence adrift from those moorings. 

That has significant consequences for the scope of the conduct element of the offence, especially when applied to public posts to a general audience. The structure of the conduct element also lends itself to tactical chilling of speech. It is questionable whether these concerns would be sufficiently compensated by the requirement to prove intent to harm and lack of reasonable excuse.

[Unintended negative at end of section 'Psychological harm' corrected 4 Dec 2021; Updated 29 April 2022 to note omission of "reasonable grounds to believe" in Bill as introduced to Parliament; and 11 July 2022 to note introduction of "reasonable grounds to infer" by proposed government amendment at Report Stage.]



Wednesday, 3 November 2021

The draft Online Safety Bill concretised

A.    Introduction 

1.       The draft Online Safety Bill is nothing if not abstract. Whether it is defining the adult (or child) of ordinary sensibilities, mandating proportionate systems and processes, or balancing safety, privacy, and freedom of speech within the law, the draft Bill resolutely eschews specifics.  

2.      The detailing of the draft Bill’s preliminary design is to be executed in due course by secondary legislation, with Ofcom guidance and Codes of Practice to follow. Even at that point, there is no guarantee that the outcome would be clear rules that would enable a user to determine on which side of the safety line any given item of content might fall.

3.      Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in?

4.      We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that.

B.    A concrete hypothetical

Our scenario concerns an amateur blogger who specialises in commenting on the affairs of his local authority. He writes a series of blogposts (which he also posts to his social media accounts) critical of a senior officer of the local authority, who has previously made public a history of struggling with mental health issues. The officer says that the posts have had an impact on her mental health and that she has sought counselling.

5.      This hypothetical scenario is adapted from the Sandwell Skidder case, in which a council officer brought civil proceedings for harassment under the Protection from Harassment Act 1997 against a local blogger, a self-proclaimed “citizen journalist”.

6.      The court described the posts in that case, although not factually untrue, as a “series of unpleasant, personally critical publications”. It emphasised that nothing in the judgment should be taken as holding that the criticisms were justified. Nevertheless, and not doubting what the council officer said about the impact on her, in a judgment running to 92 paragraphs the court held that the proceedings for harassment stood no reasonable prospect of success and granted the blogger summary judgment.

7.       In several respects the facts and legal analysis in the Sandwell Skidder judgment carry resonance for the duties that the draft Bill would impose on a user to user (U2U) service provider:

a.       The claim of impact on mental health.

b.      The significance of context (including the seniority of the council officer, the council officer’s own previous video describing her struggle with mental health issues; and the legal requirement for there to have been more than a single post by the defendant).

c.       The defendant being an amateur blogger rather than a professional journalist (the court held that the journalistic nature of the blog was what mattered, not the status of the person who wrote it).

d.      The legal requirement that liability for harassment should be interpreted by reference to Art 10 ECHR.

e.       The significance for the freedom of expression analysis of the case being one of publication to the world at large.

f.        The relevance that similar considerations would have to the criminal offence of harassment under the 1997 Act.

8.      Our hypothetical potentially requires consideration of service provider safety duties for illegality and (for a Category 1 service provider) content harmful to adults. (Category 1 service providers would be designated on the basis of being high risk by reason of size and functionality.)

9.      The scenario would also engage service provider duties in respect of some or all of freedom of expression, privacy, and (for a Category 1 service provider) journalistic content and content of democratic importance.

10.   We will assume, for simplicity, that the service provider in question does not have to comply with the draft Bill’s “content harmful to children” safety duty.

C.     The safety duties in summary

11.    The draft Bill’s illegality safety duties are of two kinds: proactive/preventative and reactive.

12.   The general proactive/preventative safety duties under S.9(3)(a) to (c) apply to priority illegal content designated as such by secondary legislation. Although these duties do not expressly stipulate monitoring and filtering, preventative systems and processes are to some extent implicit in e.g. the duty to ‘minimise the presence of priority illegal content’.

13.   It is noteworthy, however, that an Ofcom enforcement decision cannot require steps to be taken “to use technology to identify a particular kind of content present on the service with a view to taking down such content” (S.83(11)).

14.   Our hypothetical will assume that criminally harassing content has been designated as priority illegal content.

15.   The only explicitly reactive duty is under S.9(3)(d), which applies to all in-scope illegal content. The duty sits alongside the hosting protection in the eCommerce Directive, but is cast as a positive obligation to remove in-scope illegal content upon gaining awareness of its presence, rather than (as in the eCommerce Directive) merely exposing the provider to potential liability under the relevant substantive law. The knowledge threshold appears to be lower than that in the eCommerce Directive.

16.   There is also a duty under S.9(2), applicable to all in-scope illegality, to take “proportionate steps to mitigate and effectively manage” risks of physical and psychological harm to individuals. This is tied in some degree to the illegal content risk assessment that a service provider is required to carry out. For simplicity, we shall consider only the proactive and reactive illegality safety duties under S.9(3).

17.   Illegality refers to certain types of criminal offence set out in the draft Bill. They would include the harassment offence under the 1997 Act.

18.   The illegality safety duties apply to user content that the service provider has reasonable grounds to believe is illegal, even though it may not in fact be illegal. As the government has said in its Response to the House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age:

“Platforms will need to take action where they have reasonable grounds to believe that content amounts to a relevant offence. They will need to ensure their content moderation systems are able to decide whether something meets that test.”

19.   That test, under the draft Bill’s definition of illegal content, applies not only to content actually present on the provider’s service, but also to kinds of content that may hypothetically be present on its service in the future.

20.  That would draw the service provider into some degree of predictive policing. It also raises questions about the level of generality at which the draft Bill would require predictions to be made and how those should translate into individual decisions about concrete items of content.

21.   For example, would a complaint by a known person about a known content source that passed the ‘reasonable grounds’ threshold concretise the duty to minimise the presence of priority illegal content? Would that require the source of the content, or content about the complainant, to be specifically targeted by minimisation measures? This has similarities to the long-running debate about ‘stay-down’ obligations on service providers.

22.  The question of the required level of generality or granularity, which also arises in relation to the ‘content harmful to adults’ duty, necessitates close examination of the provisions defining the safety duties and the risk assessment duties upon which some aspects of the safety duties rest. It may be that there is not meant to be one answer to the question; that it all comes down to proportionality, Ofcom guidance and Codes of Practice.  However, even taking that into account, some aspects remain difficult to fit together satisfactorily. If there is an obvious solution to those, no doubt someone will point me to it.

23.  The “content harmful to adults” safety duty requires a Category 1 service provider to make clear in its terms and conditions how such content would be dealt with and to apply those terms and conditions consistently. There is a question, on the wording of the draft Bill, as to whether a service provider can state that ‘we do nothing about this kind of harmful content’. The government’s position is understood to be that that would be permissible.

24.  The government’s recent Response to the Lords Digital and Communications Committee Report on Freedom of Expression in the Digital Age says:

“Where harmful misinformation and disinformation does not cross the criminal threshold, the biggest platforms (Category 1 services) will be required to set out what is and is not acceptable on their services, and enforce the rules consistently. If platforms choose to allow harmful content to be shared on their services, they should consider other steps to mitigate the risk of harm to users, such as not amplifying such content through recommendation algorithms or applying labels warning users about the potential harm.”

25.  If the government means that considering those “other steps” forms part of the Category 1 service provider’s duty, it is not obvious from where in the draft Bill that might stem.

26.  In fulfilling any kind of safety duty under the draft Bill a service provider would be required to have regard to the importance of protecting users’ right to freedom of expression within the law. Similarly it has to have regard to the importance of protecting users from unwarranted infringements of privacy. (Parenthetically, in the Sandwell Skidder case privacy was held not to be a significant factor in view of the council officer’s own previous published video.)

27.  Category 1 providers would be under further duties to take into account the importance of journalistic content and content of democratic importance when making decisions about how to treat such content and whether to take action against a user generating, uploading or sharing such content.

D.    Implementing the illegality safety duties

Proactive illegality duties: S.9(3)(a) to (c)

28.  We have assumed that secondary legislation has designated criminally harassing content as priority illegal content. The provider has to have systems and processes designed to minimise the presence of priority illegal content, the length of time for which it is present, and the dissemination of such content. Those systems could be automated, manual or both.

29.  That general requirement has to be translated into an actual system or process making actual decisions about actual content. The system would (presumably) have to try to predict the variety of forms of harassment that might hypothetically be present in the future, and detect and identify those that pass the illegality threshold (reasonable grounds to believe that the content is criminally harassing).

30.  Simultaneously it would have to try to avoid false positives that would result in the suppression of user content falling short of that threshold. That would seem to follow from the service provider’s duty to have regard to the importance of protecting users’ right to freedom of expression within the law. For Category 1 service providers that may be reinforced by the journalistic content and content of democratic importance duties. On the basis of the Sandwell Skidder judgment our hypothetical blog should qualify at least as journalistic content.
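The tension between those two pulls can be illustrated with a toy example. Everything below is invented for illustration - the scores, the ground-truth labels and the thresholds bear no relation to any real moderation system - but the trade-off it shows is structural: a threshold loose enough to catch illegal content will sweep in lawful speech, and vice versa.

```python
# Hypothetical scores (0..1) from an imagined automated harassment classifier,
# paired with an assumed ground truth of whether a court would find reasonable
# grounds to believe the content criminally harassing.
items = [
    (0.95, True),   # persistent targeted abuse
    (0.60, False),  # robust but lawful criticism of a public official
    (0.40, False),  # satire
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Count (false positives, missed illegal items) at a given threshold.
    False positives are lawful posts suppressed; misses are illegal posts left up."""
    false_positives = sum(1 for score, illegal in items
                          if score >= threshold and not illegal)
    missed = sum(1 for score, illegal in items
                 if score < threshold and illegal)
    return false_positives, missed

# A threshold low enough to catch the abuse also suppresses lawful criticism:
assert evaluate(0.5) == (1, 0)
# A stricter threshold avoids both here -- but real content rarely separates so tidily:
assert evaluate(0.9) == (0, 0)
```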

31.   What would that involve in concrete terms? First, the system or process would have to understand what does and does not constitute a criminal offence. That would apply at least to human moderators. Automated systems might be expected to do likewise. The S.9(3) duty makes no distinction (albeit there appears to be tension between the proactive provisions of S.9(3) and the limitation on Ofcom’s enforcement power in S.83(11) (para 13 above)).

32.  Parenthetically, where harassment is concerned not only the offence under the 1997 Act might have to be understood. Hypothetical content could also have to be considered under any other potentially applicable offences - the S.127 Communications Act offences, say (or their possible replacement by a ‘psychological harm’ offence as recommended by the Law Commission); and the common law offence of public nuisance or its statutory replacement under the Policing Bill currently going through Parliament.

33.  It is worth considering, by reference to some extracts from the caselaw, what understanding the 1997 Act harassment offence might involve:

• There is no statutory definition of harassment. It “was left deliberately wide and open-ended” (Majrowski v Guy’s and St Thomas’s NHS Trust [2006] ICR 1199).

• The conduct must cross “the boundary from the regrettable to the unacceptable” (ibid).

• “… courts will have in mind that irritations, annoyances, even a measure of upset, arise at times in everybody’s day-to-day dealings with other people. Courts are well able to recognise the boundary between conduct which is unattractive, even unreasonable, and conduct which is oppressive and unacceptable” (ibid).

• Reference in the Act to alarming the person or causing distress is not a definition; it is merely guidance as to one element (Hayes v Willoughby [2013] 1 WLR 935).

• “It would be a serious interference with freedom of expression if those wishing to express their own views could be silenced by, or threatened with, claims for harassment based on subjective claims by individuals that they feel offended or insulted.” (Trimingham v Associated Newspapers Ltd [2012] EWHC 1296)

• “When Article 10 [ECHR] is engaged then the Court must apply an intense focus on the relevant competing rights… . Harassment by speech cases are usually highly fact- and context-specific.” (Canada Goose v Persons Unknown [2019] EWHC 2459)

• “The real question is whether the conduct complained of has extra elements of oppression, persistence and unpleasantness and therefore crosses the line… . There may be a further question, which is whether the content of statements can be distinguished from their mode of delivery.” (Merlin Entertainments v Cave [2014] EWHC 3036)

• “[P]ublication to the world at large engages the core of the right to freedom of expression. … In the social media context it can be more difficult to distinguish between speech which is “targeted” at an individual and speech that is published to the world at large.” (McNally v Saunders [2021] EWHC 2012)

34.  Harassment under the 1997 Act is thus a highly nuanced concept - less of a bright line rule that can be translated into an algorithm and more of an exercise in balancing different rights and interests against background factual context – something that even the courts do not find easy.

35.  For the harassment offence the task of identifying criminal content on a U2U service is complicated by the central importance of context and repetition. The potential relevance of external context is illustrated by the claimant’s prior published video in the Sandwell Skidder case. A service provider’s systems are unlikely to be aware of relevant external context.

36.  As to repetition, initially lawful conduct may become unlawful as the result of the manner in which it is pursued and its persistence. That is because the harassment offence requires, in the case of conduct in relation to a single person, conduct on at least two occasions in relation to that person. That is a bright line rule. One occasion is not enough.

37.   It would seem, therefore, to be logically impossible for a proactive moderation system to detect a single post and validly determine that it amounts to criminal harassment, or even that there are reasonable grounds to believe that it does. The system would have to have detected and considered together, or perhaps inferred the existence of, more than one harassing post.
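For those inclined to think in code, the point can be made starkly: the course-of-conduct element is the one part of the offence that really is a bright-line rule, and even it defeats a single-post detector. A minimal sketch (all names and data structures hypothetical, and capturing only this one element - none of the evaluative balancing discussed above):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str   # hypothetical identifier for the poster
    target: str   # hypothetical identifier for the person the post concerns
    text: str

def course_of_conduct_possible(detected: list[Post], author: str, target: str) -> bool:
    """The 1997 Act requires, for conduct in relation to a single person,
    conduct on at least two occasions. One occasion is never enough, so a
    system that has detected only one post cannot validly find harassment."""
    occasions = [p for p in detected if p.author == author and p.target == target]
    return len(occasions) >= 2

# A single detected post, however unpleasant, cannot satisfy the element:
assert course_of_conduct_possible(
    [Post("blogger", "officer", "critical post")], "blogger", "officer") is False
```

Note what the sketch cannot do: it presupposes reliable attribution of author and target across posts, which is itself a hard problem at scale.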

38.  The court in the Sandwell Skidder case devoted 92 paragraphs of judgment to describing the facts, the law, and weighing up whether the Sandwell Skidder’s posts amounted to harassment under the 1997 Act. That luxury would not be available to the proactive detection and moderation systems apparently envisaged by the draft Bill, at least to the extent that - unlike a court - they would have to operate at scale and in real or near-real time.

Reactive illegality duty: S.9(3)(d)

39.  The reactive duty on the service provider under S.9(3)(d) is to have proportionate systems and processes in place designed to: “where [it] is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content”.

40.  Let us assume that the service provider’s proactive illegality systems and processes have not already suppressed references to our citizen journalist’s blogposts. Suppose that, instead of taking a harassment complaint to court, the subject of the blogposts complains to the service provider. What happens then?

41.   In terms of knowledge and understanding of the law of criminal harassment, nothing differs from the proactive duties. From a factual perspective, the complainant may well have provided the service provider with more context as seen from the complainant’s perspective.

42.  As with the proactive duties, the threshold that triggers the reactive takedown duty is not awareness that the content is actually illegal. If there are reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence, the service provider is positively obliged to have a system or process designed to take it down swiftly.

43.  At the same time, however, it is required to have regard to the importance of freedom of expression within the law (and, if a Category 1 service provider, to take into account the importance of journalistic content and content of democratic importance).
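Put crudely, and assuming for the sake of argument that a provider could sort complained-of content into tiers at all, the trigger for the reactive duty might be sketched like this (the tier labels are mine, not the draft Bill's):

```python
from enum import Enum, auto

class Assessment(Enum):
    NO_GROUNDS = auto()           # no reasonable grounds to believe illegal
    REASONABLE_GROUNDS = auto()   # grounds to believe, but illegality unproven
    ESTABLISHED_ILLEGAL = auto()  # a court-level finding of actual illegality

def takedown_duty_triggered(assessment: Assessment) -> bool:
    """The S.9(3)(d) duty bites at the 'reasonable grounds' tier, not only
    on established illegality -- a lower threshold than actual knowledge
    of unlawfulness under the eCommerce Directive."""
    return assessment in (Assessment.REASONABLE_GROUNDS,
                          Assessment.ESTABLISHED_ILLEGAL)

assert takedown_duty_triggered(Assessment.REASONABLE_GROUNDS)
assert not takedown_duty_triggered(Assessment.NO_GROUNDS)
```

The hard part, of course, is not this final step but the assessment feeding into it, which is where the court-like weighing described above has to happen.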

44.  Apart from the reduced threshold for illegality the exercise demanded of a service provider at this point is essentially that of a court. The fact that the service provider might not be sanctioned by the regulator for coming to an individual decision which the regulator did not agree with (see here) does not detract from the essentially judicial role that the draft Bill would impose on the service provider. 

E.     Implementing the ‘content harmful to adults’ safety duty

45.  Category 1 services would be under a safety duty in respect of ‘content harmful to adults’.

46.  What is ‘content harmful to adults’? It comes in two versions: priority and non-priority. The Secretary of State is able (under a peculiar regulation-making power that on the face of it is not limited to physical or psychological harm) to designate harassing content (whether or not illegal) as priority content harmful to adults.

47.  Content is non-priority content harmful to adults if its nature is such that “there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”.  A series of sub-definitions drills down to characteristics and sensibilities of groups of people, and then to those of known individuals. Non-priority content harmful to adults cannot also be illegal content (S.46(8)(a)).

48.  Whether the content be priority or non-priority, the Category 1 service provider has to explain clearly and accessibly in its terms and conditions how it would deal with actual content of that kind; and then apply those terms and conditions consistently (S.11).

49.  As already mentioned (para 23), the extent of the ‘content harmful to adults’ duty is debatable. ‘How’ could imply that such content should be dealt with in some way. The government’s intention is understood to be that the duty is transparency-only, so that the service provider is free to state in its terms and conditions that it does nothing about such content.

50.  Even on that basis, the practical question arises of how general or specific the descriptions of harmful content in the terms and conditions have to be. Priority content could probably be addressed at the generic level of kinds of priority content designated in secondary legislation. Whether our hypothetical blogpost would fall within any of those categories would depend on how harassing content had been described in secondary legislation – for instance, whether a course of conduct was stipulated, as with the criminal offence.

51.   The question of level of generality is much less easy to answer for non-priority content. For instance, the element of the ‘non-priority content harmful to adults’ definition that concerns known adults appears to have no discernible function in the draft Bill unless it in some way affects the Category 1 service provider’s ‘terms and conditions’ duty. Yet if it does have an effect of that kind, it is difficult to see what that could be intended to be.

52.  The fictional character of the “adult of ordinary sensibilities” (see here for a detailed discussion of this concept and its antecedents) sets out initially to define an objective standard for adverse psychological impact (albeit the sub-definitions progressively move away from that). An objective standard aims to address the problem of someone subjectively claiming to have suffered harm from reading or viewing material. That carries the risk of embedding the sensitivities of the most easily offended reader.

53.  For non-priority content harmful to adults, the S.11 duty kicks in if harassing content has been identified as a risk in the “adults’ risk assessment” that a Category 1 service provider is required to undertake. As with illegal content, content harmful to adults includes content hypothetically present on the system.

54.  This relationship creates the conundrum that the higher the level of abstraction at which the adults’ risk assessment is conducted, the greater the gap that has to be bridged when translating it to actual content; alternatively, if the risk assessment is conducted at a more granular and concrete level, for instance down to known content sources and known individuals who are the subject of online content, it could rapidly multiply into unfeasibility.

55.   So, what happens if the Category 1 service provider is aware of a specific blog, or of specific content contained in a blog, or of a specific person who is the subject of posts in the blog, that has been posted to its service? Would that affect how it had to fulfil its duties in relation to content harmful to adults?

56.  Take first a known blog and consider the service provider’s transparency duty. Does the service provider have to explain in its terms and conditions how content from individually identified user sources is to be dealt with? On the face of it that would appear to be a strange result. However, the transparency duty and its underlying risk assessment duty are framed by means of an uneasy combination of references to ‘kind’ of content and ‘content’, which leaves the intended levels of generality or granularity difficult to discern.

57.   The obvious response to this kind of issue may be that a service provider is required only to put in place proportionate systems and processes. That, however, provides no clear answer to the concrete question that the service provider would face: do I have to name any specific content sources in my terms and conditions and explain how they will be dealt with; if so, how do I decide which?

58.  Turning now to a known subject of a blog, unlike for known content sources the draft Bill contains some specific, potentially relevant, provisions. It expressly provides that where the service provider knows of a particular adult who is the subject of user content on its service, or to whom it knows that such content is directed, it is that adult’s sensibilities and characteristics that are relevant. The legal fiction of the objective adult of ordinary sensibilities is replaced by the actual subject of the blogpost.
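The displacement of the legal fiction by a known individual amounts to a simple substitution, which can be sketched as follows (names invented for illustration; this captures only the switch of yardstick, not the impact assessment itself):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Adult:
    label: str  # stand-in for the characteristics and sensibilities in play

# The draft Bill's objective yardstick: a legal fiction, not a real person.
ORDINARY_ADULT = Adult("adult of ordinary sensibilities")

def yardstick(known_subject: Optional[Adult]) -> Adult:
    """Where the provider knows of a particular adult who is the subject of
    the content (or to whom it knows the content is directed), that adult's
    own sensibilities and characteristics displace the objective fiction."""
    return known_subject if known_subject is not None else ORDINARY_ADULT

# Once the council officer complains and becomes a known subject,
# the assessment is anchored to her, not to the fictional adult:
complainant = Adult("known complainant")
assert yardstick(complainant) is complainant
assert yardstick(None) is ORDINARY_ADULT
```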

59.  So in the case of our hypothetical blog, once the council officer complains to the service provider, the service provider knows of the complainant’s identity and also, crucially, knows of the assertion that they have suffered psychological harm as a result of the content on their service.

60.  The service provider’s duty is triggered not by establishing actual psychological harm, but by reasonable grounds to believe that there is a material risk of the content having a significant adverse physical or psychological impact. Let us assume that the service provider has concluded that its ‘harmful to adults’ duty is at least arguably triggered. What does the service provider have to do?

61.   As with a known blog or blogpost, focusing the duty to the level of a known person raises the question: does the service provider have to state in its terms and conditions how posts about, or directed at, that named person will be dealt with? Does it have to incorporate a list of such known persons in its terms and conditions? It is hard to believe that that is the government’s intention. Yet combining the Category 1 safety duty under S.11(2)(b) with the individualised version of the 'adult of ordinary sensibilities' appears to lean in that direction.

62.  If that is not the consequence, and if the Category 1 duty in relation to content harmful to adults is ‘transparency-only’, then how (if at all) would the ‘known person’ provision of the draft Bill affect what the service provider is required to do? What function does it perform? If the ‘known person’ provision does have some kind of substantive consequence, what might that be? That may raise the question whether someone who claims to be at risk of significant adverse psychological impact from the activities of a blogger could exercise some degree of personal veto or some other kind of control over dissemination of the posts.

63.  Whatever the answer may be to the difficult questions that the draft Bill poses, what it evidently does do is propel service providers into a more central role in determining controversies: all in scope service providers where a decision has to be made as to whether there are reasonable grounds to believe that the content is illegal, or presents a material risk of serious adverse psychological impact on an under-18; and Category 1 service providers additionally for content harmful to adults.