Wednesday, 16 June 2021

Carved out or carved up? The draft Online Safety Bill and the press

When he announced the Online Harms White Paper in April 2019 the then Culture Secretary, Jeremy Wright QC, was at pains to reassure the press that the proposed regulatory regime would not impinge on press freedom. He wrote in a letter to the Society of Editors:

“where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

The last sentence, at any rate, always seemed like an impossible promise to fulfil. The government’s subsequent attempts to live up to it have resulted in some of the more inscrutable elements of the draft Online Safety Bill. 

Carve-out for news publisher content

It is true that ‘news publisher content’ is carved out of the safety duties that would be imposed on user-to-user and search services. The exemption is intended to address the problem that a news publisher’s feed on, for instance, a social media site would constitute user generated content. As such, without an exemption it would be directly affected by the social media platform’s own duty of care and indirectly regulated by Ofcom.

However, a promise not to affect journalistic or editorial content goes further than that. First, the commitment is not limited to broadcasters or newspapers regulated by IPSO or IMPRESS.  Second, as we shall see, a regulatory framework may still have an indirect effect on content even if the content is carved out of the framework.

Furthermore, even trying to exclude direct effect gives rise to a problem. If you want to carve out the press, how do you do so without giving the government (or Ofcom) power to decide who does and does not qualify as the press? If a state organ draws that line, isn’t the resulting official list in itself an exercise in press regulation? We shall see how the draft Bill has tried to solve this conundrum.

Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those who facilitate it must take precautionary steps to mitigate the danger. That is the antithesis of the traditional principle that, within boundaries set by clear and precise laws, we are free to speak as we wish. The mainstream press may comfort themselves that this novel approach to speech is (for the moment) being applied only to the evil internet and to the unedited individual speech of social media users; but it is an unwelcome concept to see take root if you have spent centuries arguing that freedom of expression is not a fundamental risk, but a fundamental right.

Even the most voluble press advocates of imposing a duty of care on internet platforms have offered what seems a slightly muted welcome to these aspects of the draft Bill. Lord Black, in the House of Lords on 18 May 2021, (after declaring his interest as deputy chairman of the Telegraph Media Group) said:

“The draft Bill includes a robust and comprehensive exemption for news publishers from its framework of statutory regulation … . That is absolutely right. During pre-legislative scrutiny of the Bill, we must ensure that this exemption is both watertight and practical so that news publishers are not subject to any form of statutory control, and that there is no scope for the platforms to censor legitimate content.”

One might ask what constitutes ‘legitimate’ content and who - if not the platforms – would decide. Ofcom? At any rate the draft Bill will disappoint anyone hoping for a duty of care regime that could not have any effect at all on news publisher content. It is difficult to see how things could be otherwise, the former Culture Secretary’s promise notwithstanding.

The draft Bill

Now we can embark on a tour of the draft Bill’s attempts to square the circle of delivering on the former Secretary of State’s promise. First, a diagram.

Got that? Probably not.

So let us conduct a point by point examination of how the draft Bill tries to exclude the press from its regulatory ambit, and consider how far it succeeds. The News Media Association’s submission to the White Paper consultation, to which I will refer, contained a list of what the NMA thought the legislation should do in order to carve out the press. Unsurprisingly, the draft Bill falls short.

But first, a note on terminology: it is easy to slip into using ‘platforms’ to describe those organisations in scope. We immediately think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is not only about them: the government estimates that 24,000 companies and organisations will be in scope. That is everyone from the largest players to an MP’s discussion app, via Mumsnet and the local sports club discussion forum. So, in an effort not to lose sight of who is in scope, I shall adopt the dismally anodyne ‘U2U provider’.

Moderated comments sections

The first limb of the Secretary of State’s commitment was to avoid duplicating existing regulation of moderated comments sections on newspapers’ own websites. That has been achieved not by a press-specific exemption, but through the draft Bill’s general exclusion of low risk ‘limited functionality’ services. This provision exempts services in which users are able to communicate only in the following ways: posting comments or reviews relating to content produced or published by the provider of the service (or by a person acting on behalf of the provider), and in various specified related ways (such as ‘like’ or ‘dislike’ buttons).

This exemption as drafted has problems, since technically (even if not contractually) a user is able to post anything to a non-proactively moderated free text review section. That could include comments on comments – a degree of freedom which of itself appears to be disqualifying – even if the intended purpose is that the facility should be used only for reviewing the provider’s own content.

As for the protection that the exemption tangentially offers to comments sections on press websites, it is notable that it can be repealed or amended by secondary legislation, if the Secretary of State considers that to be appropriate because of the risk of physical or psychological harm to individuals in the UK presented by a service of the description in question.

News publisher content – what is it?

News publisher content present on a service is exempted from the service provider’s safety duties. There are two primary categories of news publisher content: that generated by UK-regulated broadcasters and that generated by other recognised news publishers. The latter have to meet a number of qualifying conditions, both administrative and substantive.

Administrative conditions

Administratively, a putative recognised news publisher must:

    (a) be an entity (i.e. an incorporated or unincorporated body or association of persons, or an organisation);

    (b) have a registered office or other business address in the UK;

    (c) be the person with legal responsibility for material published by it in the UK; and

    (d) publish (by any means, including broadcasting) the name, address and registered number (if any) of the entity; and publish the name and address (and, where relevant, registered or principal office and registered number) of any person who controls the entity (control meaning the same as in the Broadcasting Act).

Failure to meet any of these conditions would be fatal to an argument that the entity’s output qualified as news publisher content.

Organisations proscribed under the Terrorism Act 2000, or whose purpose is to support a proscribed organisation, are expressly excluded from the news publisher exemption.

Substantive conditions

Substantively, the entity must:

    (a) have as its principal purpose the publication of news-related material, such material being created by different persons and being subject to editorial control;

    (b) publish such material in the course of a business (whether or not carried on with a view to profit);

    (c) be subject to a standards code (one published either by an independent regulator or by the entity itself); and

    (d) have policies and procedures for handling and resolving complaints.

Again, failure to meet any of these conditions would be fatal.

‘News-related material’ has the same definition as in the Crime and Courts Act 2013:

    (a) news or information about current affairs;

    (b) opinion about matters relating to the news or current affairs; or

    (c) gossip about celebrities, other public figures or other persons in the news.

News-related material is ‘subject to editorial control’ if there is a person (whether or not the publisher of the material) who has editorial or equivalent responsibility for the material, including responsibility for how it is presented and the decision to publish it.

Reposted news publisher material

The draft Bill also contains limited exemptions for news publisher content reposted by other users. To qualify, the material must be uploaded to or shared on the service by a user of the service, and:

    (a) Reproduce in full an article or written item originally published by a recognised news publisher (but not be a screenshot or photograph of that article or item or of part of it);

    (b) Be a recording of an item originally broadcast by a recognised news publisher (but not be an excerpt of such a recording); or

    (c) Be a link to a full article or written item originally published, or to a full recording of an item originally broadcast, by a recognised news publisher.

What isn’t exempted?

What news-related content would fall outside the exemptions from the U2U provider’s safety duties? Some of the most relevant are:

  • The user reposting exemption does not apply to quotations, snippets, excerpts, screenshots and the like.
  • Content from non-UK news publishers will not be exempt unless the publisher is able to jump through the administrative and substantive hoops described above. The requirement to have a registered office or other business address in the UK would itself seem likely to exclude the vast majority of non-UK news providers.
  • Individual journalist accounts. Many well known broadcast and news journalists have their own Twitter or other social media accounts and make use of them prolifically to report on current news. These are outside the primary exemption, since an individual journalist is not a recognised news publisher. (Some of what individual journalists do would, of course, fall within the re-posting exemption.) The NMA argued that the exemption must apply to “the news publishers, corporately and individually to all their workforce and contributors”.

One opaque aspect of the exemption is what is meant by content “generated” by a recognised news publisher. If a newspaper publishes a story incorporating an embedded link to a TikTok video (as the Daily Mail did recently with the video from a migrant boat crossing the Channel), is the link part of the content generated by the news publisher? If so, is it anomalous that the story – including the embedded video - on the news publisher’s own site, subsequently posted to (say) Twitter, is exempt from Twitter’s safety duty, yet the same video originally posted on TikTok is still within scope of TikTok’s safety duty?

The example of amateur video uploaded from a migrant boat brings us neatly to the topic of citizen journalism. Citizen journalism is within scope of U2U providers’ safety duties and, for ordinary U2U providers, enjoys no special status over and above any other user generated content. 

Large players (Category 1 providers) will have a variety of freedom of expression duties imposed on them, applicable to UK-linked news publisher content or journalistic content, as well as some duties in respect of so-called content of democratic importance. The duties will include, for instance, an obligation to specify in terms and conditions by what method journalistic content is to be identified. Since the draft Bill says only that journalistic content is content ‘generated for the purposes of journalism’, identifying such content looks like a tall order.

The journalistic content provisions are likely to run into criticism from opposing ends: on the one hand, that some users will rely on them as a smokescreen to protect what is in reality non-journalistic material; on the other, that the concept is too vague to be of real use, so that in practice the decision on how to categorise content is handed to Ofcom.

What is the significance of news publisher content being exempted?

The news publisher content exemption means that U2U providers do not have a safety duty for news publisher content. In other words, they are not obliged to include news publisher content in the various steps that they are required to take to fulfil their safety duties.

That does not mean that news publisher content could not be affected as a by-product of U2U providers’ attempts to discharge their safety duties over other user content: although providers are not required proactively to monitor and inhibit news publisher content, such content could still be caught up in a provider’s efforts to do that for user generated content generally.

Lord Black spoke of precluding any scope for platforms to censor legitimate content. The closest the draft Bill’s general provisions come is the duty 'to have regard to the importance of freedom of expression’. For Category 1 providers the focus is additionally on dedicated, expedited complaints procedures and transparency of terms and conditions. 

The Impact Assessment concludes, under Freedom of Expression, that the regulatory model’s focus on transparency and user reporting and redress should lead to “some improvements” in users’ ability to appeal content removal and have it reinstated, “with a positive impact on freedom of expression”.

The Policy Risks table annexed to the Impact Assessment goes into more detail:



Risk: Regulation disproportionately impacts on freedom of expression, by incentivising or requiring content takedown.

Mitigation: The approach has built in appropriate safeguards to ensure protections for freedom of expression, including:

● Differentiated approach of legal/illegal content, e.g. not requiring takedown of legal but harmful content

● Safeguards for journalistic content

● Effective transparency reporting

● Proportionate enforcement sanctions to avoid incentivising takedowns

● User redress mechanisms will enable challenge to takedown

● Super-complaints will allow organisations to lodge complaints where they may be concerned about disproportionate impacts

● Regulator has a duty to consider freedom of expression


The Impact Assessment summarises the government’s final policy position thus:

“There will … be strong safeguards in place to ensure media freedom is upheld. Content and articles published by news media on their own sites will not be considered user generated content and thus will be out of regulatory scope.

Legislation will also include robust protections for journalistic content on in-scope services. Firstly, the legislation will provide a clear exemption for news publishers’ content. This means platforms will not have any new legal duties for these publishers’ content as a result of our legislation. Secondly, the legislation will oblige Category 1 companies to put in place safeguards for all journalistic content shared on their platforms. The safeguards will ensure that platforms consider the importance of journalism when undertaking content moderation, and can be held to account for the removal of journalistic content, including with respect to automated moderation tools.”

At the moment it is anyone’s guess what the various duties would mean when crystallised into practical requirements – a vice ingrained throughout the draft Bill. We will know only when Ofcom, however many years down the line, produces its series of safety Codes of Practice for the various different kinds of U2U service. A U2U provider would (unless it decides to take the brave route of claiming compliance with the safety duties in ways other than those set out in a Code of Practice) have to comply with whatever the applicable Code of Practice may say about freedom of expression.

If Ofcom were to go down the route of suggesting in a Code of Practice that news publisher content should be walled off from being indirectly affected by implementation of the providers’ safety duties, how could that be achieved? The spectre of an Ofcom-approved list of news publisher content providers rears its head again.

Even if there were such a list, how would such content be identified and separated out in practice? The NMA consultation submission suggested a system of ‘kite marking’. IT engineers could still be trying to build tagging systems to make that work in ten years’ time.

The government’s draft Online Safety Bill announcement claimed that the measures required of ordinary and large providers would “remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties.” (emphasis added)

This bold statement – in contrast with the more modest claim in the Impact Assessment - shows every sign of being another unfulfillable promise, whether for news publisher content or user-generated content generally.

Lord Black said in the Lords debate:

“We have the opportunity with this legislation to lead the world in ensuring proper regulation of news content on the internet, and to show how that can be reconciled with protecting free speech and freedom of expression. It is an opportunity we should seize.”

It can be no real surprise that a solution to squaring that circle is as elusive now as when the Secretary of State wrote to the Society of Editors two years ago. It has every prospect of remaining so.

Tuesday, 8 June 2021

Big Brother Watch/Rättvisa – a multifactorial puzzle

The European Court of Human Rights Grand Chamber has now delivered its long awaited judgment in Big Brother Watch.  It always seemed a bit of a stretch that the Strasbourg Court would tell the UK to close down the bulk (so to speak) of GCHQ’s operations, especially since 15 years ago the Weber/Saravia decision had accepted the principle of bulk communications surveillance (albeit in a world in which digital communications were not yet ubiquitous). 

So it proved. The Court’s Big Brother Watch judgment (and its companion judgment in the Swedish Centrum för Rättvisa case) lay down a revised set of fundamental rights criteria by which to assess bulk surveillance regimes, but do not forbid them as such.

The Grand Chamber’s approach

The twin judgments are notable for advancing further down the path of assessing a surveillance regime not by drawing red lines that must not be crossed, but by applying a multifactorial evaluation of criteria that feed into a “global assessment” of the regime's compliance with the “provided by law” and “necessary in a democratic society” requirements of the Convention.

The “provided by law” Convention requirement is that a measure must have some basis in law, and also have the quality of law: be publicly accessible and sufficiently certain and precise so as to be foreseeable in its effects. The scope of any discretion to exercise a surveillance power must be indicated with sufficient clarity to provide adequate protection against arbitrary interference.  

The conundrum that faces a human rights court is how such traditional rule of law requirements – certainty of law, foreseeability of legal effects, accessibility of a legal regime – can be applied to the inherently secret and discretionary nature of communications surveillance. The answer has been to import the notion that safeguards (such as independent oversight) can compensate for lack of openness, so long as the kind of circumstances in which communications surveillance may take place are clearly set out in legislation, supplemented if necessary by instruments such as codes of practice. The ECtHR’s particular focus on the role of safeguards is facilitated by its policy of considering the “provided by law” test jointly with whether the interference constituted by a given regime is “necessary in a democratic society” (BBW [334], Rättvisa [248]).

It is not a straightforward task to decide at what point safeguards sufficiently compensate for the rule of law deficiencies presented by secret exercise of a discretionary power. The Grand Chamber describes the role of safeguards in bulk interception of digital communications as “pivotal and yet elusive” (BBW [322], Rättvisa [236]). 

It is hard to avoid the conclusion that the search for this will-o’-the-wisp is ultimately a matter of impression – the more so, the further the evaluation strays from red lines that cannot be crossed towards an overall multifactorial assessment, the result of which depends on how much weight the court chooses to give to each factor.

Bulk interception not per se unlawful

The challenge that faces a party seeking to strike down a bulk interception regime is how to bring a substantive objection – that a bulk communications surveillance regime is inherently repugnant - within the framework of a “quality of law” and “necessity” challenge. The argument will be that the interference with privacy and (perhaps) freedom of expression entailed by bulk communications interception is so great that, although useful, bulk communications interception does not pass the “necessity” test. This is the kind of argument that succeeded in the Marper case on blanket retention of DNA, fingerprint and cellular samples.

In the BBW and Rättvisa cases the Grand Chamber held that a decision to operate a bulk interception regime continues to fall within the competence (“margin of appreciation”) of a Contracting State. The state’s freedom of choice in how to operate such a regime is, however, more constrained. (BBW [340, 347], Rättvisa [254, 261])

Another way of stating the objection to such a regime might be that, given the scale of the interference, no amount of safeguards can compensate for the lack of foreseeability inherent in the secret exercise of bulk communications surveillance powers. However, in reality once necessity is surmounted in principle, the examination moves on to whether the combination of accessibility, precision of rules and compensating safeguards embodied in the regime under challenge is sufficient for Convention compliance.

The Court’s decision on RIPA

In BBW the UK’s now superseded RIPA (Regulation of Investigatory Powers Act 2000) regime was under challenge. As in the Chamber judgment in 2018, the Grand Chamber found the UK regime wanting. But it did so in slightly different ways:


Article 8

Chamber (2018): Bulk interception – lack of provision for sufficient oversight of the entire selection process, specifically search criteria and selectors [387, 388].

Grand Chamber: Lack of independent authorisation at the outset [377]; lack of provision for oversight of categories of selectors at point of authorisation, and lack of provision for enhanced safeguards for use of strong selectors linked to identifiable individuals [383]; insufficiently precise nature of SoS certificate as to descriptions of material necessary to be examined [386, 387, 391]. All applicable to both content and RCD [416].

Chamber (2018): Bulk interception – examination of related communications data (RCD) exempted from all safeguards applicable to content, such as the S.16(2) ‘British Islands’ restriction applicable to content [357, 387, 388].

Grand Chamber: Lack of ‘British Islands’ restriction for RCD is not decisive in the overall assessment [421]; different storage periods for RCD (“several months”) were not evident in the Interception Code, and should be included in legislative and/or other general measures [423].

Chamber (2018): Communications data acquisition – violation of EU law meant that acquisition could not be in accordance with the law [467, 468].

Grand Chamber: Not contested [521, 522].

Article 10

Chamber (2018): Bulk interception – lack of protection for journalistic privilege at selection and examination stage (content and RCD) [493, 495, 500].

Grand Chamber: As per Art 8; additionally, no requirement for a judge or similar to decide whether use of selectors or search terms known to be connected to a journalist was justified by an overriding requirement in the public interest, or whether a less intrusive measure might have sufficed [456]; nor provision for similar authorisation of continued storage and examination of confidential journalistic material once a connection to a journalist became known [457].

Chamber (2018): Communications data acquisition – insufficiently broad journalistic privilege protections [499, 500].

Grand Chamber: Not contested [527, 528].

The main concrete point of difference from the Chamber judgment is probably the Grand Chamber's emphasis on prior independent authorisation. That, in the form of Judicial Commissioner approval of the Secretary of State’s decision to issue a warrant, is now a feature of the Investigatory Powers Act 2016 which has superseded RIPA.

It is difficult to predict specific implications of the two Grand Chamber judgments for the IP Act. This is due to the Court’s already noted holistic, multifactorial approach to fundamental rights compliance. Although in places the Grand Chamber speaks of ‘minimum requirements’ – which might suggest a cumulative set of threshold conditions – in others it speaks of ‘shortcomings’ that inform the overall assessment and may be compensated for by other features of the regime.

This approach is more prominent in the Rättvisa judgment, in which the Court held that while certain safeguards did compensate for identified shortcomings in the Swedish regime, they did not do so sufficiently. The BBW judgment, while also adopting the “global assessment” approach, is in substance a starker exercise in striking down the RIPA regime owing to lack of certain safeguards. 

The main reason for the difference between the two judgments is that the Swedish surveillance regime did provide for initial authorisation of bulk warrants by an independent Foreign Intelligence Court. It could not, therefore, be said (as it was for RIPA in BBW) that the regime lacked independent authorisation at the outset – a minimum requirement that the Court has now described as a “fundamental safeguard” that “should” be present ([377]). The approach of the Court in Rättvisa was therefore of necessity more nuanced.

Hard versus soft limits

By contrast with the Grand Chamber’s holistic, multifactorial approach, the EU Court of Justice has moved in the direction of insisting that the relevant legal instruments set out clear and precise hard limits on powers.

That contrast may to some extent reflect the different roles of the two courts. The CJEU’s task is to lay down the content of substantive, positive EU law, within the framework of the Charter of Fundamental Rights. The task of the ECtHR is not to harmonise or lay down positive law (although when it ventures into the territory of horizontal rights it comes perilously close to doing that), but to determine whether a potentially wide variety of  Contracting State laws has strayed beyond the boundaries of Convention compatibility.

Although even the CJEU must allow for some differences in Member State domestic laws, it is in principle able to be more prescriptive than the ECtHR. 

At any rate, the ECtHR (confirmed by the Grand Chamber in the BBW and Rättvisa cases) has taken a softer-edged approach, with greater stress on safeguards than on the need for clear and precise limits on powers (emphasised by the CJEU most recently in Privacy International/La Quadrature). Whether or not that ultimately means a substantively stricter outcome than the CJEU's approach, it certainly makes for one that is less predictable in terms of compliance with the Convention.

The ECtHR’s approach is exemplified by the set of compliance criteria articulated by the Grand Chamber in BBW and Rättvisa. It has laid down eight minimum criteria, compared with the six in Weber/Saravia, to be considered in deciding whether a surveillance regime passes the initial ‘in accordance with the law’ test.

The criteria are that the Court will examine whether the domestic framework clearly defines:

1. the grounds on which bulk interception may be authorised;

2. the circumstances in which an individual’s communications may be intercepted;

3. the procedure to be followed for granting authorisation;

4. the procedures to be followed for selecting, examining and using intercept material;

5. the precautions to be taken when communicating the material to other parties;

6. the limits on the duration of interception, the storage of intercept material and the circumstances in which such material must be erased and destroyed;

7. the procedures and modalities for supervision by an independent authority of compliance with the above safeguards and its powers to address non-compliance;

8. the procedures for independent ex post facto review of such compliance and the powers vested in the competent body in addressing instances of non-compliance.

These are framed as topic areas that have to be clearly addressed in domestic law. They also imply some degree of minimum requirement: for instance, domestic legislation that addressed the topic of limits on the duration of interception by stating clearly that it may be unlimited would not pass muster. Similarly, the factors connote some level of independent supervision and review.

However, what those implied minimum requirements might amount to in practice is not easy to tell. The eight topics appear to be as much criteria to be assessed – perhaps more so – as a cumulative set of threshold conditions to be surmounted. They may have elements of both. The Court referred in its judgment to its ‘overall assessment’ of the bulk interception regime, emphasising that shortcomings in some areas may be compensated for by safeguards in others. The Court may also take into account factors beyond the eight minimum criteria, such as notification provisions.

In a separate Opinion Judge Pinto de Albuquerque pointed out the ambiguity in the Grand Chamber’s judgment as to whether it was laying down factors to be considered or mandatory requirements:

“On the one hand, it has used imperative language (“should be made”, “should be subject”, “should be authorised”, “should be informed”, “must be justified”, and “should be scrupulously recorded”, “should also be subject”, “it is imperative that the remedy should”) and has called them “fundamental safeguards” and even “minimum safeguards”. But on the other hand, it has diluted these safeguards in “a global assessment of the operation of the regime”, allowing for a trade-off among the safeguards. It seems that at the end of the day each individual safeguard is not mandatory, and the prescriptive language of the Court does not really correspond to non-negotiable features of the domestic system.”

That said, the Court went on to lay down what it described as the “fundamental safeguards” that would be the cornerstone of an Article 8-compliant bulk interception regime ([350]). This was articulated in the context of the particular model presented to the court (collection; filtering to discard unwanted material; automated application of selectors and search queries; manual queries by analysts; examination by analysts; subsequent retention and use), which the Court regarded as involving increasing interference with privacy as the process progressed ([325]). This model already feels somewhat old-fashioned, given the more sophisticated pattern-matching and other techniques that could now be applied to analysis of, in particular, bulk communications data.

The Court's requirements are that the process must be subject to end-to-end safeguards, meaning that: 

  • At each stage of the process an assessment must be made of the necessity and proportionality of the measures being taken. [350]

  • Bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation are being defined. [351]

  • The operation should be subject to supervision and independent ex post facto review. [350]

The Court commented that the importance of supervision and review is amplified compared with targeted interception because of the inherent risk of abuse and the legitimate need for secrecy [349].

Drilling down further into those fundamental safeguards, the Court observed that:

  • The independent authorising body should be informed of both the purpose of the interception and the bearers or communication routes likely to be intercepted. [352]
  • Given that the choice of selectors and query terms determines which communications will be eligible for examination by an analyst, the authorisation should at the very least identify the types or categories of selectors to be used. The Court accepted that the inclusion of all selectors in the authorisation may not be feasible in practice. [354]
  • Enhanced safeguards should be in place for strong selectors linked to identifiable individuals. The use of every such selector must be justified by the intelligence services and that justification should be scrupulously recorded and be subject to a process of prior internal authorisation providing for separate and objective verification of whether the justification conforms to the principles of necessity and proportionality. [355]
  • Each stage of the bulk interception process – including the initial authorisation and any subsequent renewals, the selection of bearers, the choice and application of selectors and query terms, and the use, storage, onward transmission and deletion of the intercept material – should be subject to supervision by an independent authority. That supervision should be sufficiently robust to keep the interference with Art 8 rights to what is “necessary in a democratic society”. In order to facilitate supervision, detailed records should be kept by the intelligence services at each stage of the process. [356]
  • Finally, an effective remedy should be available to anyone who suspects that his or her communications have been intercepted by the intelligence services, either to challenge the lawfulness of the suspected interception or the Convention compliance of the interception regime. A remedy that does not depend on notification to the interception subject can be effective. But it is then imperative that the remedy should be before a body which, while not necessarily judicial, is independent of the executive and ensures the fairness of the proceedings, offering, in so far as possible, an adversarial process. The decisions of such authority shall be reasoned and legally binding with regard, inter alia, to the cessation of unlawful interception and the destruction of unlawfully obtained and/or stored intercept material. [357]

The Court also provided guidance on sharing intercept material with agencies in other countries.

In the light of the above, the Court will determine whether a bulk interception regime is Convention compliant by conducting a global assessment of the operation of the regime. Such assessment will focus primarily on whether the domestic legal framework contains sufficient guarantees against abuse, and whether the process is subject to “end-to-end safeguards”. In doing so, the Court will have regard to the actual operation of the system of interception, including the checks and balances on the exercise of power, and the existence or absence of any evidence of actual abuse. [360]

The Court also observed that it was not persuaded that the acquisition of related communications data through bulk interception is necessarily less intrusive than the acquisition of content. It therefore considered that the interception, retention and searching of related communications data should be analysed by reference to the same safeguards as those applicable to content. [363]

That said, the Court observed that while the interception of related communications data would normally be authorised at the same time as the interception of content, once obtained the data could permissibly be treated differently by the intelligence services.

In view of the different character of related communications data and the different ways in which they are used by the intelligence services, as long as the aforementioned safeguards were in place, the legal provisions governing their treatment did not necessarily have to be identical in every respect to those governing the treatment of content. [364]

Implications for the Investigatory Powers Act 2016

Where does this leave the 2016 Act? The Act ticks several important boxes, notably the “double lock” system of approval of bulk warrants by a Judicial Commissioner introduced after the end of the RIPA regime.

When considering the Convention compliance of the IP Act regime the Rättvisa decision is probably more factually relevant than the BBW decision, since it addresses a regime that featured initial authorisation by an independent court.

The IP Act in some respects provides stronger safeguards than those that fell short in Rättvisa – thus the UK IPT was held up as an example of what was possible in the area of ex post facto review.

On the other hand, the Swedish regime provided for mandatory presence of a privacy protection representative at Foreign Intelligence Court sessions. That was identified as a relevant safeguard to be weighed against the fact that the Court had never held a public hearing and that all its decisions were confidential.

There is no provision in the IP Act for a privacy protection representative to make submissions in the bulk warrant approval process. As to publicising bulk warrant approval decisions, in his April 2018 Advisory Notice the Investigatory Powers Commissioner said:

“The Judicial Commissioners will consider making any decisions on approvals public, subject to any statutory limitations and necessary redactions.”

It is noteworthy that the latest Annual Report of the Investigatory Powers Commissioner (for 2019) records that a Judicial Commissioner issued the first approvals of a communications data retention notice regarding internet connection records. It also describes a potential obstacle to approval of warrants posed by MI5's IT issues. Whilst this evinces a degree of openness, it does not go as far as (for instance) a practice of publishing Judicial Commissioner decisions on points of legal interpretation.

Given the multifactorial, trade-off-oriented approach of the Grand Chamber it is impossible to be categorical about whether this aspect of the IP Act regime presents Convention compliance problems. On the basis of Rättvisa we can expect, however, that it will be argued that either a privacy (and freedom of expression?) representative should be able to make submissions in the bulk warrant approval decision-making process, or the possibility of publishing elements of bulk warrant approval decisions should be explored further, or perhaps both.

As for the double-lock procedure itself, although the Secretary of State remains the primary decision-maker, and it is occasionally suggested that Judicial Commissioner approval, being based on judicial review principles, falls short of full scrutiny, it should not be forgotten that the Advisory Notice issued by the IPC in April 2018 stated that the Judicial Commissioners would not apply the relatively hands-off ‘Wednesbury reasonableness’ test, but instead the judicial review test applied by the domestic courts when considering interferences with fundamental rights. That would be taken into account in any assessment of the level of scrutiny applied to warrants.

Another area of the IP Act that is likely to attract attention is the IP Act's bulk communications data acquisition warrant. This is the successor to S.94 of the Telecommunications Act 1984, which the government admitted in November 2015 had been used for bulk acquisition of communications data from communications service providers.

Unlike bulk interception under RIPA (and now under the IP Act), the bulk communications acquisition warrant is not focused on foreign intelligence purposes. Given the various references in the BBW and Rättvisa judgments to bulk interception being primarily used for foreign intelligence, and the acknowledgment that bulk communications data should not be regarded as less sensitive than content, the Convention compliance of a domestic bulk acquisition regime may fall to be considered in the future.

A potential problem area, both for bulk interception and communications data acquisition, is journalistic privilege. Although the IP Act contains stronger protections for journalistic material than did RIPA, it may be questioned whether those, at least of themselves, are sufficient to meet the criticisms contained in the two ECtHR judgments.

Returning to the central theme of the Grand Chamber judgments, does the IP Act provide sufficient end-to-end safeguards over the bulk interception process? Following the Chamber judgment in 2018 I suggested that since the 2016 Act did not spell out whether end-to-end oversight was applied to all stages of the bulk interception process, more would need to be done to fill that gap (remembering that it is not enough for that simply to be done – it must be required to be done by means of clearly stated public rules). That view is reinforced by the Grand Chamber judgment. I can do no better than repeat what I said then:

“Beyond that, under the IP Act the Judicial Commissioners have to consider at the warrant approval stage the necessity and proportionality of conduct authorised by a bulk warrant. Arguably that includes all four stages identified by the Strasbourg Court (see my submission to IPCO earlier this year). If that is right, the RIPA gap may have been partially filled.

However, the IP Act does not specify in terms that selectors and search criteria have to be reviewed. Moreover, focusing on those particular techniques already seems faintly old-fashioned. The Bulk Powers Review reveals the extent to which more sophisticated analytical techniques such as anomaly detection and pattern analysis are brought to bear on intercepted material, particularly communications data. Robust end to end oversight ought to cover these techniques as well as use of selectors and automated queries. 

The remainder of the gap could perhaps be filled by an explanation of how closely the Judicial Commissioners oversee the various selection, searching and other analytical processes.

Filling this gap may not necessarily require amendment of the IP Act, although it would be preferable if it were set out in black and white. It could perhaps be filled by an IPCO advisory notice: first as to its understanding of the relevant requirements of the Act; and second explaining how that translates into practical oversight, as part of bulk warrant approval or otherwise, of the end to end stages involved in bulk interception (and indeed the other bulk powers).”

The case for the gap to be filled formally is reinforced when we consider that the government has publicly referred to discussions that have been taking place with IPCO to strengthen end-to-end supervision in practice. The Grand Chamber judgment records the government’s argument that:

“Robust independent oversight of selectors and search criteria was therefore within the IC Commissioner’s powers: by the time of his 2014 report he had specifically put in place systems and processes to make sure that actually occurred, and, following the Chamber judgment, the Government had been working with the IC Commissioner’s Office to ensure that there would be enhanced oversight of selectors and search criteria under IPA.”

In his Annual Report for 2019 (published in December 2020) the Investigatory Powers Commissioner stated:

“Our oversight of bulk powers has evolved over the past year (see para 10.27). This reflected the European Court of Human Rights’ judgment in the Big Brother Watch and others v UK case, and the Intelligence and Security Committee’s (ISC) Privacy and Security Report of March 2015. We reviewed our approach to inspecting bulk interception in 2019, considering the technically complex ways in which bulk interception is implemented and from 2020 our inspections will include a detailed examination of selectors and search criteria.”

Now that we have the Grand Chamber judgment the case appears to be stronger for the end-to-end oversight arrangements, and IPCO’s interpretation of the 2016 Act in that regard, to be spelled out publicly. That would also be well timed for the forthcoming review of the operation of the 2016 Act that is required to start in a year’s time.

Sunday, 16 May 2021

Harm Version 3.0: the draft Online Safety Bill

Two years on from the April 2019 Online Harms White Paper, the government has published its draft Online Safety Bill. It is a hefty beast: 133 pages and 141 sections. It raises a slew of questions, not least around press and journalistic material and the newly-coined “content of democratic importance”. Also, for the first time, the draft Bill spells out how the duty of care regime would apply to search engines, not just to user generated content sharing service providers.

This post offers first impressions of a central issue that started to take final shape in the government’s December 2020 Full Response to consultation: the apparent conflict between imposing content monitoring and removal obligations on the one hand, and the government’s oft-repeated commitment to freedom of expression on the other - now translated into express duties on service providers.

That issue overlaps with a question that has dogged the Online Harms project from the outset: what does it mean by safety and harm?  The answer shapes the potential impact of the legislation on freedom of expression. The broader and vaguer the notion of harm, the greater the subjectivity involved in complying with the duty of care, and the greater the consequent dangers for online users' legitimate speech. 

The draft Bill represents the government's third attempt at defining harm (if we include the White Paper, which set no limit). The scope of harm proposed in its second version (the Full Response) has now been significantly widened.

For legal but harmful content the government apparently now means to set an overall backstop limitation of "physical or psychological harm", but whether the draft Bill achieves that is doubtful. In any event that would still be broader than the general definition of harm proposed in the Full Response: a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. For illegal content the Full Response's general definition would not apply; and the new backstop definition would have only limited relevance.   

Moderation and filtering duties

If one provision can be said to lie at the heart of the draft Bill, it is section 9(3). This describes duties that will apply to all the estimated 24,000 in-scope service providers. It is notable that pre-Brexit, duties (a) to (c) would have fallen foul of the ECommerce Directive's Article 15 ban on imposing general monitoring obligations on hosting providers. Section 9(3) thus departs from 20 years of EU and UK policy aimed at protecting the freedom of expression and privacy of online users.  

Section 9(3) imposes: 

A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the presence of priority illegal content;

(b) minimise the length of time for which priority illegal content is present;

(c) minimise the dissemination of priority illegal content;

(d) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

Duty (d) approximately parallels the hosting liability shield in the ECommerce Directive, but is cast as a positive regulatory obligation to operate takedown processes, rather than as potential exposure to liability for a user's content should the shield be disapplied on gaining knowledge of its illegality.

As is typical of regulatory legislation, the draft Bill is not a finished work. It is more like a preliminary drawing intended to be filled out later. For instance, the extent of the proactive moderation and filtering obligations implicit in Section 9(3) depends on what constitutes ‘priority illegal content’. That is not set out in the draft Bill, but would be designated in secondary legislation prepared by the Secretary of State. The same holds for ‘priority content that is harmful to adults’, and for a parallel category relating to children, which underpin other duties in the draft Bill. 

Since Section 9(3) and the other duties make no sense without the various kinds of priority content first being designated, regulations would presumably have to be made before the legislation can come into force.  The breadth of the Secretary of State's discretion in designating priority content is discussed below.

If secondary legislation is a layer of detail applied to the preliminary drawing a further layer, yet more detailed, will consist of codes of practice, guidance and risk profiles for different kinds of service, all issued by Ofcom.

This regulatory vessel would be pointed in the government's desired direction by a statement of strategic online safety priorities issued by the Secretary of State, to which Ofcom would be required to have regard. The statement could set out particular outcomes. The Secretary of State would first have to consult with Ofcom, then lay the draft before Parliament so as to give either House the opportunity to veto it.  

The moderation and filtering obligations implicit in Section 9(3) and elsewhere in the draft Bill would take the lion’s share – £1.7bn of the £2.1bn that the government’s Impact Assessment reckons in-scope providers will have to spend on complying with the legislation over the first 10 years. Moderation is expected to be both technical and human:

“…it is expected that undertaking additional content moderation (through hiring additional content moderators or using automated moderation) will represent the largest compliance cost faced by in-scope businesses.” (Impact Assessment [166])

Additional moderation costs are expected to be incurred in greater proportion by the largest (Category 1) providers: 7.5% of revenue for Category 1 organisations and 1.9% for all other in-scope organisations (Impact Assessment [180]). That presumably reflects the obligations specific to Category 1 providers in relation to legal but 'harmful to adults’ content.

Collateral damage to legitimate speech

Imposition of moderation and filtering obligations, especially at scale, raises the twin spectres of interference with users’ privacy and collateral damage to legitimate speech. The danger to legitimate speech arises from misidentification of illegal or harmful content and lack of clarity about what is illegal or harmful. Incidence of collateral damage resulting from imposition of such duties is likely to be affected by:

  •     The proof threshold that triggers the duty. The lower the standard to which a service provider has to be satisfied of illegality or harm, the greater the likelihood of erroneous removal or inhibition.
  •     Scale. The greater the scale at which moderation or filtering is carried out, the less feasible it is to take account of individual context. Even for illegal content the assessment of illegality will, for many kinds of illegality, be context-sensitive.
  •     Subjectivity of the harm. If harm depends upon the subjective perception of the reader, a harm standard according to the most easily offended reader may develop.
  •     Vagueness. If the kind of harm is so vaguely defined that no sensible line can be drawn between identification and misidentification, then collateral damage is hard-wired into the regime.
  •     Scope of harm. The broader the scope of harm to which a duty applies, the more likely that it will include subjective or vague harms.

Against these criteria, how does the draft Bill score on the collateral damage index?

Proof threshold. S.41 defines illegal content as content where the service provider has reasonable grounds to believe that use or dissemination of the content amounts to a relevant criminal offence. Illegal content does not have to be definitely illegal in order for the section 9(3) duties to apply.

The scale of the required moderation and filtering is apparent from the Impact Assessment.

The scope of harm has swung back and forth. Version 1.0 was contained in the government’s April 2019 White Paper. It encompassed the vaguest and most subjective kinds of harm. Most kinds of illegality were within scope. For harmful but legal content there was no limiting definition of harm. Effectively, the proposed regulator (now Ofcom) could have deemed what is and is not harmful.

By the time of its Full Consultation Response in December 2020 the government had come round to the idea that the proposed duty of care should relate only to defined kinds of harm, whether they arose from illegal user content or user content that was legal but harmful [Full Response 2.24].

In this Version 2.0, harm would consist of a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. A criminal offence would therefore be in scope of a provider’s duty of care only if the offence presented that kind of risk.

Although still problematic in retaining some elements of subjectivity through inclusion of psychological impact, the Full Response proposal significantly shifted the focus of the duty of care towards personal safety properly so-called. It was thus more closely aligned to the subject matter of comparable offline duties of care.

Version 3.0. The draft Bill states as a general definition that "harm" means "physical or psychological harm". This is an attenuated version of the general definition proposed in the Full Response. However, the draft Bill does not stipulate that 'harmful' should be understood in the same limited way. The result of that omission, combined with other definitions, could be to give the Secretary of State regulation-making powers for legal but harmful content that are, on the face of them, not limited to physical or psychological harm.

This may well not be the government's intention. When giving evidence to the Commons Digital, Culture, Media and Sport Committee last week, Secretary of State Oliver Dowden stressed (14:57 onwards) that regulations would not go beyond physical or psychological harm. This could usefully be explored during pre-legislative scrutiny.

For legal but harmful content the draft Bill does provide a more developed version of the Full Response’s general definition of harm, tied to impact on a hypothetical adult or child "of ordinary sensibilities". This is evidently an attempt to inject some objectivity into the assessment of harm. It then adds further layers addressing impact on members of particularly affected groups or particularly affected people with certain characteristics (neither specified), impact on a specific person about whom the service provider knows, and indirect impact. These provisions will undoubtedly attract close scrutiny.  

In any event, this complex definition of harm does not have universal application within the draft Bill. It governs only a residual category of content outside the Secretary of State’s designated descriptions of priority harmful content for adults and children. The longer the Secretary of State's lists of priority harmful content designated in secondary legislation, the less ground in principle would be covered by content to which the complex definition applies. The Secretary of State is not constrained by the complex definition when designating priority harmful content. 

Nor, on the face of it, is the Secretary of State limited to physical or psychological harm. However, as already flagged, that may well not represent the intention of the government. That omission would be all the more curious, given that Ofcom has a consultation and recommendation role in the regulation-making process, and the simple definition – physical or psychological harm - does constrain Ofcom’s recommendation remit. 

The Secretary of State has a parallel power to designate priority illegal content (which underpins the section 9(3) duties above) by secondary legislation. He cannot include offences relating to:

-        Infringement of intellectual property rights

-        Safety or quality of goods (as opposed to what kind of goods they are)

-        Performance of a service by a person not qualified to perform it

In considering whether to designate an offence the Secretary of State does have to take into account, among other things, the level of risk of harm being caused to individuals in the UK by the presence of content that amounts to the offence, and the severity of that harm. Harm here does mean physical or psychological harm.  

As with harmful content, illegal content includes a residual category designed to catch illegality neither specifically identified in the draft Bill (terrorism and CSEA offences) nor designated in secondary legislation as priority illegal content. This category consists of “Other offences of which the victim or intended victim is an individual (or individuals).” This, while confined to individuals, is not limited to physical or psychological harm. 

The first round of secondary legislation designating categories of priority illegal and harmful content would require affirmative resolutions of each House of Parliament. Subsequent regulations would be subject to negative resolution of either House.

To the extent that the government’s rowback from the general definition of harm contained in the Full Response enables more vague and subjective kinds of harm to be brought back into scope of service provider duties, the risk of collateral damage to legitimate speech would correspondingly increase.

Internal contradictions

The draft Bill lays down various risk assessments that in-scope providers must undertake, taking into account a ‘risk profile’ of that kind of service prepared by Ofcom and to be included in its guidance about risk assessments.

As well as the Section 9(3) moderation and filtering duties set out above, for illegal content a service provider would be under a duty to take proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the service’s most recent illegal content risk assessment.

In parallel to these duties, the service provider is placed under a duty to have regard to the importance of protecting users’ right to freedom of expression within the law when deciding on, and implementing, safety policies and procedures.

However, since the very duties imposed by the draft Bill create a risk of collateral damage to legitimate speech, a conflict between duties is inevitable. The potential for conflict increases with the scope of the duties and the breadth and subjectivity of their subject matter. 

The government has acknowledged the risk of collateral damage in the context of Category 1 services, which would be subject to duties in relation to lawful content harmful to adults in addition to the duties applicable to ordinary providers.

Category 1 service providers would have to prepare assessments of their impact on freedom of expression and (as interpreted by the government's launch announcement) demonstrate that they have taken steps to mitigate any adverse effects. The government commented:

“These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.” (emphasis added)

This passage acknowledges the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal.  That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.

No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. That smacks of faith in the existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it. 

Internal conflicts between duties, underpinned by the Version 3.0 approach to the notion of harm, sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as – to use the Secretary of State’s words – a censor’s charter.