Sunday 24 May 2020

A Tale of Two Committees

Two Commons Committees – the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – have recently held evidence sessions with government Ministers discussing, among other things, the government’s proposed Online Harms legislation. These sessions proved to be as revealing about the government’s intentions as its February 2020 Initial Response to the White Paper, if not more so.

As a result, on some topics we know more than we did, but the picture is still incomplete. Some new issues have surfaced. Other areas have become less clear than they were previously.

Above all, nothing is set in stone. The Initial Response was said to be indicative of a direction of travel and to form an iterative part of a process of policy development. The destination has yet to be reached – if, that is, the government ever gets there at all. It may yet hit a road block somewhere along the way, veer off into a ditch, or perhaps undergo a Damascene conversion should it finally realise the unwisdom of creating a latter-day Lord Chamberlain for the internet. Or the road may eventually peter out into nothingness. At present, however, the government is pressing ahead with its legislative intentions.

I’m going to be selective about my choice of topics, in the main returning to some of the key existing questions and concerns about the Online Harms proposals, with a sprinkling of new issues added for good measure. Much more ground than this was covered in the two sessions.

Borrowing from the old parlour game, each topic starts with what the White Paper said; followed by what the Initial Response said; then what the Ministers said; and lastly, the Consequence. The Ministers are Oliver Dowden MP (Secretary of State for Digital, Culture, Media and Sport); Caroline Dinenage MP (Minister for Digital and Culture) and Baroness Williams (Lords Minister, Home Office).  

Sometimes the government’s Initial Response to Consultation recorded consultation submissions, but came to no conclusion on the topic. In those instances the Initial Response is categorised as saying ‘Nothing’. Some repetitive statements have been pruned.

Since this is a long read, here is a list of the selected topics:

1. Will Parliament or the regulator decide what “harm” means?
2. The regulator’s remit: substance, process or both?
3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?
4. Codes of Practice for specific kinds of user content or activity?
5. Search engines in scope?
6. Everything from social media platforms to retail customer review sections?
7. Will journalism and the press be excluded from scope?
8. End to end encryption
9. Identity verification
10. Extraterritoriality

1. Will Parliament or the regulator decide what “harm” means?


The White Paper said:

“… government action to tackle online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

“This list [Table 1, Online harms in scope] is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.”

The Initial Response said:

Nothing.

The Ministers said:

Oliver Dowden: “The only point that I have tried to make is that I am just keen on this proportionality point because it is often the case that regulation that starts out with the best of intentions can, in its interpretation if you do not get it right, have a life of its own. It starts to get interpreted in a way that Parliament did not intend it to be in the first place. I am just keen to make sure we put those kinds of hard walls around it so that the regime is flexible but that in its interpretation it cannot go beyond the intent that we set out in the first place in the broad principles.” (emphasis added)

Caroline Dinenage: “For what you might call the “legal but harmful” harms, we are not setting out to name them in the legislation. That is for the simple reason that technology moves on at such a rapid pace that it is very likely that we would end up excluding something….  We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Q.554) (emphasis added)

The Consequence: It is difficult to reconcile the desire of the Secretary of State to erect “hard walls”, in order to avoid unintended consequences, with the government’s apparent determination to leave the notion of harm undefined, delegating to the regulator the task of deciding what counts as harmful. This kind of approach has serious implications for the rule of law.

Left undelineated, the concept of harm is infinitely malleable. The Home Office Minister Baroness Williams suggested in the Committee session that 5G disinformation could be divided into “harmless conspiracy theories” and “that which actually leads to attacks on engineers”, as well as a far-right element. One Committee member (Ruth Edwards M.P.) responded that she did not think that any element of the conspiracy theory could be categorised as ‘harmless’, because “it is threatening public confidence in the 5G roll-out” — a proposition with which the DCMS Minister Caroline Dinenage agreed.

Harm is thus equated with people changing their opinion about a telecommunications project. This unbounded sense of harm is on a level with the notorious “confusing our understanding of what is happening in the wider world” phraseology of the White Paper.  

Statements such as the concluding peroration by Baroness Williams: “I, too, want to make the internet a safer place for my children, and exclude those who seek to do society harm” have to be viewed against the backdrop of an essentially unconstrained meaning of harm.

When harm can be interpreted so broadly, the government is playing with fire. But it is we – not the government, the regulator or the tech companies – who stand to get our fingers burnt.

2. The regulator’s remit: substance, process or both?


The White Paper said:

“In particular, companies will be required to ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms, as well as to take appropriate and proportionate action when issues arise. The new regulatory regime will also ensure effective oversight of the take-down of illegal content, and will introduce specific monitoring requirements for tightly defined categories of illegal content.” (6.16)

The Initial Response said:

“The approach will be proportionate and risk-based with the duty of care designed to ensure companies have appropriate systems and processes in place to improve the safety of their users.”

“The focus on robust processes and systems rather than individual pieces of content means it will remain effective even as new harms emerge. It will also ensure that service providers develop, clearly communicate and enforce their own thresholds for harmful but legal content.”

“The kind of processes the codes of practice will focus on are systems, procedures, technologies and investment, including in staffing, training and support of human moderators.”

“As such, the codes of practice will contain guidance on, for example, what steps companies should take to ensure products and services are safe by design or deliver prompt action on harmful content or activity.”

“Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“In fact, the new regulatory framework will not require the removal of specific pieces of legal content. Instead, it will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“Of course, companies will be required to take particularly robust action to tackle terrorist content and online Child Sexual Exploitation and Abuse. The new regulatory framework will not remove companies’ existing duty to remove illegal content.”

The Ministers said:

Caroline Dinenage: “the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q554)

“It is for the regulator to set out codes of practice, but they won’t be around individual harms; they will be around systems and processes—what we expect the companies to do. Rather than focusing on individual harms, because we know that the technology moves on so quickly that there could be more, it is a case of setting out the systems and processes that we would expect companies to abide by, and then giving the regulator the opportunity to impose sanctions on those that are not doing so.” (Q.556)

Q562 Stuart C. McDonald: “…if the regulator feels that algorithms are working inappropriately and directing people who have made innocent searches to, say, far-right content, will they be able to order, essentially, the company to make changes to how its algorithms are operating?


Caroline Dinenage: Yes, I think that they will. That is clearly something that we will set out in the full response. The key here is that companies must have clear transparency, they must set out clear standards, and they must have a clear duty of care. If they are designing algorithms that in any way put people at risk, that is, as I say, a clear design choice, and that choice carries with it a great deal of responsibility. It will be for the regulator to oversee that responsibility. If they have any concerns about the way that that is being upheld, there are sanctions that they can impose.”

The Consequence: As with the specific issue around the status of terms and conditions for “lawful but harmful” content (see below), it is difficult to see how a bright line can be drawn between substance and process.  Processes cannot be designed, risk-assessed or their effectiveness evaluated in the abstract — only by reference to goals such as improving user safety and reducing risk of harm. A duty of care evaluated without reference to the kind of harm intended to be guarded against makes no more sense than the smile without the Cheshire Cat. 

In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in the abstract:
"It is never sufficient to ask simply whether A owes B a duty of care. It always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless."

Risk assessment is familiar in the realm of safety properly so-called: danger of physical injury, where there is a clear understanding of what constitutes objectively ascertainable harm. It breaks down when applied to undefined, inherently subjective harms arising from users’ speech. If “threatening public confidence in the 5G roll-out” (see above) can be labelled an online harm within scope of the legislation, that goes far beyond any tenable concept of safety.

The government apparently intends to take different approaches to illegal content and “legal but harmful” content, the latter avowedly restricted to process (although see the next topic as to how far that can really be the case).

In passing, the Initial Response is technically incorrect in referring to “companies’ existing duty to remove illegal content”. No such general duty exists. Hosting providers lose the protection of the eCommerce Directive liability shield if they do not remove unlawful content expeditiously upon gaining actual or (for damages) constructive knowledge of the illegality. Even then, the eCommerce Directive does not oblige them to remove it. The consequence is that they become exposed to possible liability (which may or may not exist) under the relevant underlying law (see here for a fuller explanation). In practice that regime strongly incentivises hosting providers to remove illegal content upon gaining relevant knowledge. But they have no general legal obligation to do so.


3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?


The White Paper said:

“As indication of their compliance with their overarching duty of care to keep users safe, we envisage that, where relevant, companies in scope will:

  • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
  • Enforce their own relevant terms and conditions effectively and consistently. …”

“To help achieve these outcomes, we expect the regulator to develop codes of practice that set out:

  • Steps to ensure products and services are safe by design.
  • Guidance about how to ensure terms of use are adequate and are understood by users when they sign up to use the service. …
  • Steps to ensure harmful content or activity is dealt with rapidly. …
  • Steps to monitor, evaluate and improve the effectiveness of their processes.”

The Initial Response said:

“We will not prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour is acceptable on their sites and then for platforms to enforce this consistently.”

“To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that is not illegal but has the potential to cause harm. Regulation will therefore not force companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.”

“Recognising concerns about freedom of expression, the regulator will not investigate or adjudicate on individual complaints. Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently.”

The Ministers said:

Oliver Dowden: “The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions. My focus in respect of those is principally on two things: underage harms and illegal harms. Clearly, the trickiest category is legal adult harms. In respect of that, we are looking at how we tighten the measures to ensure that those companies actually do what they promised they would do in the first place, which often is not the case.” (Q20) (emphasis added)

“Clearly, in respect of legal adult harms, that is the underlying principle anyway in the sense that what we are really trying to do is say to those social media companies and tech firms, “Be true to what you say you are doing. Just stick by your terms and conditions”. We would ask the regulator to make sure that it is enforcing them, and then have tools at our disposal to require it to do so.” (Q89) (emphasis added)

Caroline Dinenage: “A lot of this is about companies having the right regulations and standards and duty of care, and that will also be in the online harms Bill and online harms work. If we can have more transparency as to what platforms regard as acceptable—there will be a regulator that will help guide them in that process—I think we will have a much better opportunity to tackle those things head-on.” (Q513) (emphasis added)

“With regard to our role in DCMS, it is more as a co-ordinator bringing together the work of all the different Government Departments and then liaising directly with the platforms to make sure that their standards, their regulations, are reflective of some of the concerns that we have—make sure, in some cases, that harmful content can be anticipated and therefore prevented, and, where that is not possible, where it can be stopped and removed as quickly as possible.” (emphasis added) (Q525)

Baroness Williams: “There is obviously that which is illegal and that which breaches the CSPs’ terms of use. It is that latter element, particularly in the area of extremism, on which we have really tried to engage with CSPs to get them to be more proactive.” (emphasis added) (Q.527)

The Consequence: This is now one of the most puzzling areas of the government’s developing policy. The White Paper expected that codes of practice would ensure that terms and conditions meet “standards set by the regulator” and that terms of use are “adequate”. These statements were not on the face of them limited to procedural standards and adequacy. They could readily be interpreted as encompassing standards and adequacy judged by reference to harm reduction goals determined by the regulator (which, as we have seen, would be able to decide for itself what constitutes harm) – in other words, extending to the substantive content of intermediaries' terms and conditions.

When the Initial Response was published, great play was made of the shift to a differentiated duty of care: that it would be up to the intermediary to decide – for lawful content seen by adults – what standards to put in its terms and conditions.

The remit of the regulator would be limited to ensuring those standards are clearly stated and enforced “consistently and transparently” (or “effectively, consistently and transparently”, depending on which part of the Initial Response you turn to; or “effectively and consistently”, according to the White Paper). Indeed, the Secretary of State said in evidence that “The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions”.

But it seems from the other Ministers’ responses that the government has not disclaimed all interest in the substantive content of intermediaries’ terms and conditions. On the contrary, the government evidently sees it as part of its role to influence (to put it at its lowest) what goes into them. If the regulator’s task is to ensure enforcement of terms and conditions whose substantive content reflects the wishes of a government department, that is a far cry from the proclaimed freedom of intermediaries to set their own standards of acceptable lawful content.

Ultimately, what can be the point of emphasising how, in the name of upholding freedom of speech, the role of an independent regulator will be limited to enforcing the intermediaries’ own terms and conditions, if the government considers that part of its own role is to influence those intermediaries as to what substantive provisions those T&Cs should contain?

This is one aspect of an emerging issue about division of responsibility between government and the regulator. It is tempting to think that once an independent regulator is established the government itself will withdraw from the fray. But if that is not so, then reducing the remit of the independent regulator concomitantly increases the scope for the government itself to step in.

That is especially pertinent in the light of the government’s desire to cast itself as a ‘trusted flagger’, whose notifications of unlawful content the intermediaries should act upon without question. Thus Caroline Dinenage appears to regard the platforms as obliged to remove anything that the government has told them it considers to be illegal (with no apparent requirement of prior due process such as independent verification), and would like them to take seriously anything else that the government notifies to them:

“We have found that we have become—I forget the proper term, but we have become like a trusted flagger with a number of the online hosting companies, with the platforms. So when we flag information, they do not have to double-check the concerns we have. Clearly, unless something is illegal, we cannot tell organisations to take it down; they have to make their own decision based on their own consciences, standards and requirements. But clearly we are building up a very strong, trusted relationship with them to ensure that when we flag things, they take it seriously.” (Emphasis added)


4. Codes of Practice for specific kinds of user content or activity?


The White Paper said:

“[T]he White Paper sets out high-level expectations of companies, including some specific expectations in relation to certain harms. We expect the regulator to reflect these in future codes of practice.”

It then set out a list of 11 harms, accompanied in each case by a list of areas relating to that harm that it expected the regulator to include in a code of practice. For instance, in relation to disinformation, a list of 11 specific areas included:

“Steps that companies should take in relation to users who deliberately misrepresent their identity to spread and strengthen disinformation.”; and

“Promoting diverse news content, countering the ‘echo chamber’ in which people are only exposed to information which reinforces their existing views.”

The Initial Response said:

“The White Paper talked about the different codes of practice that the regulator will issue to outline the processes that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users. … We do not expect there to be a code of practice for each category of harmful content, however, we intend to publish interim codes of practice on how to tackle online terrorist and Child Sexual Exploitation and Abuse (CSEA) content and activity in the coming months.”

The Ministers said:

Caroline Dinenage: “I think I need to clear up a bit of a misunderstanding about the White Paper. The 11 harms that were listed were really intended to be an illustrative list of what we saw as the harms. The response did not expect a code of practice for each one, because the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q.554) (emphasis added)

The Consequence: The different approach to CSEA and terrorism probably owes more to the different areas of responsibility of the Home Office and the DCMS than to any dividing line between illegality and non-illegality. The White Paper covers many more areas of illegality than those two alone.

5. Search engines in scope?


The White Paper said:

“… will apply to companies that allow users to share or discover user-generated content, or interact with each other online.” (emphasis added)

“These services are offered by…  search engines” (Executive Summary)

The Initial Response said:

“The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing” (emphasis added)

The Ministers said:

Caroline Dinenage: “Again, we are probably victims of the fact that we published an interim response, which was not as comprehensive as our full response will be later on in the year. The White Paper made it very clear that search engines would be included in the scope of the framework and the nature of the requirements will reflect the type of service that they offer. We did not explicitly mention it in the interim response, but that does not mean that anything has changed. It did not cover the full policy. Search engines will be included and there is no change to our thoughts and our policy on that.” (Q.560)

The Consequence: Notwithstanding the Minister’s explanation, the alterations in wording between the White Paper and the Initial Response (omitting “discover”, adding “only”) had the appearance of a considered change. The lesson for the future is perhaps that it would be unwise to parse too closely the text of anything else said or written by the government.

6. Everything from social media platforms to retail customer review sections?


The White Paper said:

“… companies of all sizes will be in scope of the regulatory framework. The scope will include… social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.” (emphasis added)

The Initial Response said:

“To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions.”

The Ministers said:

Oliver Dowden: “We are a Europe leader in this. I have seen, as I am sure you have seen, the unintended consequences of good-intended legislation then having bureaucratic implications and costs on businesses that we want to avoid.

For example, in respect of legal online harms for adults, if you are an SME retailer and you have a review site on your website for your product and people can put comments underneath that, that is a form of social media. Notionally, that would be covered by the online harms regime as it stands. The response to that is they will go through this quick test and then they will find it does not apply to them. My whole experience of that for SMEs and others is that it is all very well saying that when you are sat there and have no idea what this online harms thing is, this potentially puts a big administrative burden on you. (emphasis added)

Are there ways in which we can carve out those sorts of areas so we focus on where we need to do it? Those kinds of arguments pertain less to illegal harms and harms to children. I hope that gives you a flavour of it.” (Q.88)

Q89 Damian Hinds: “Yes, quite so. I think in the previous announcement there was quite a high estimate of the number of firms or proportion of total firms that would somehow be counted in the definition of an online platform, which was rather a disturbing thought. It would be very welcome, what you can do to limit the scope of who counts as a social media platform.”

The Consequence: This exchange does shine a light on the expansive scope of the proposed legislation. The Secretary of State said that SME retailers with review sections were “notionally” covered. However, there was nothing notional about it.  Retailer review sections were expressly included in the White Paper, as were companies of all sizes.

As the Secretary of State suggests, it is little comfort for an SME to be told “don’t worry, you’ll be low risk so it won’t really apply to you” if: (a) you are in scope on the face of it, and (b) it is left to the regulator to decide whether the duty of care should bear less heavily on some intermediaries than others. 

There are, of course, many other kinds of non-social media platform intermediary who are in scope as well as SME retailers with review sections: apps, online games, community discussion forums, non-profits and many other online services.  The Initial Response said “Analysis so far suggests that fewer than 5% of UK businesses will be in scope of this regulatory framework.” Whether 5% is considered to be small or large in absolute terms (not to mention the apparent indifference to non-UK businesses), there has been no indication of the assumptions underlying that estimate.

7. Will journalism and the press be excluded from scope?


The White Paper said:

Nothing. In a subsequent letter to the Society of Editors the then DCMS Secretary of State Jeremy Wright said:

“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

The Initial Response said:

Nothing. It limited itself to general expressions of support for freedom of expression, such as:

“…freedom of expression, and the role of a free press, is vital to a healthy democracy. We will ensure that there are safeguards in the legislation, so companies and the new regulator have a clear responsibility to protect users’ rights online, including freedom of expression and the need to maintain a vibrant and diverse public square.”

The Ministers said:

Caroline Dinenage: “Obviously, we know that a free press is one of the pillars of our society, and the White Paper, I must say from the outset, is not seeking to prohibit press freedom at all, so journalistic and editorial content is not in the scope of the White Paper. Our stance on press regulation has not changed.” (Emphasis added)

“As for what has been in the papers recently, the Secretary of State wrote a letter to the Society of Editors, and this was about what you might call the below-the-line or comments section. They were concerned that that might be regulated. I think what the Secretary of State is saying is that, where there is already clear and effective moderation of that sort of content, we do not intend to duplicate it. For example, there is IPSO and IMPRESS activity on moderated content sections. Those are the technical words for it. This is still an ongoing conversation, so we are working at the moment with stakeholders to develop proposals on how we are going to reflect that in legislation, working around those parameters.” (Q.558)

“Stuart C. McDonald: But there is no suggestion that below-the-line remains unregulated. It is where that regulation should lie that is the issue.

Caroline Dinenage: Exactly.” (Q.559)

The Consequence: There are three distinct issues around inclusion or exclusion of the press from the regulatory scope of the Bill:

1. User comments on newspaper websites.  On the face of it, news organisations would be subject to the duty of care as regards user comments on their websites. The position of the government appears to be that whether the duty of care would apply would depend on whether the comments are already subject to another kind of regulation (or at least the existence of “clear and effective moderation”). Potentially, therefore, newspapers that are not regulated by IPSO or IMPRESS would be in scope for this purpose. Whether this demarcation would be achieved by a hard scope exclusion written into the Bill is not clear.

2. Journalistic or editorial material. Whilst the Minister may say that the government’s stance on press regulation has not changed, her statement that journalistic and editorial content is not “in the scope” of the White Paper is new — at least if we are to understand that as meaning that the Bill would contain a hard scope exclusion for journalistic or editorial content. Previously the government had said only that such content would not be affected by the regulatory framework. A general exclusion of journalistic or editorial material would on the face of it go much wider than newspapers and similar publications. It would be no surprise to find this statement being “clarified” at some point in the future.

3. Newspaper social media feeds and pages. Newspapers and other publications maintain their own pages, feeds and blogs on social media and other platforms. Newspapers would not themselves be subject to a duty of care in relation to their own content. But as far as the platforms are concerned the newspapers are users, so that their pages and feeds would fall under the platforms’ duty of care. As such, they would be liable to have action taken against their content by a platform in the course of fulfilling its own duty of care.

The government has said nothing about whether, and if so how, such press content would be excluded from scope. If the government is serious about excluding “journalistic or editorial” material generally from scope, that would achieve this. However, that would create immense difficulties around whether a particular feed or page is or is not journalistic or editorial material (what about this Cyberleagle blog, or the Guido Fawkes blog, for instance?), and how a platform is supposed to decide whether any particular content is or is not in scope.

8. End to end encryption


The White Paper said:

Nothing. (Although the potential for the duty of care to be applied to prevent the use of end to end encryption was evident.)

The Initial Response said:

Nothing.

The Ministers said:

Baroness Williams: “[Facebook] then announced that they were going to end-to-end encrypt Messenger. That, for us, is gravely worrying, because nobody will be able to see into Messenger. I know there is going to be a Five Eyes engagement next week, and I do not know if the Committee knows, but the Five Eyes wrote to Mark Zuckerberg last year, so worried were we about this development.” (Q538)

Q566 Chair: “On that basis, does end-to-end encryption count as a breach of duty of care?

Baroness Williams: It is criminal activity that would breach the duty of care. Allowing criminal activity to happen on your platform would be the breach of duty of care. End-to-end encryption, in and of itself, is not a breach of duty of care.

Chair: Presumably, for this regulation to have any bite at all, they will have to be able to take some enforcement against the policies that fail to prevent criminal activity. On that logic, introducing the end-to-end encryption, if it knowingly stops the company from preventing illegal activity—for example, the kind of online child abuse you have talked about—that would surely count as a breach of duty of care.

Baroness Williams: I fully expect that that is what some of the Five Eyes discussions, which will be happening very shortly, will look at.”

The Consequence: This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end to end encryption. The “in and of itself” phraseology used by the Minister appears not to rule that out. This issue is related to the question of how the legislation might apply to private messaging providers, a topic on which the government has consulted but has not yet published a conclusion.

9. Identity verification


The White Paper said:

“The internet can be used to harass, bully or intimidate. In many cases of harassment and other forms of abusive communications online, the offender will be unknown to the victim. In some instances, they will have taken technical steps to conceal their identity. Government and law enforcement are taking action to tackle this threat.”

“The police have a range of legal powers to identify individuals who attempt to use anonymity to escape sanctions for online abuse, where the activity is illegal. The government will work with law enforcement to review whether the current powers are sufficient to tackle anonymous abuse online.”

“Some of the areas we expect the regulator to include in a code of practice are:

  • Steps to limit anonymised users abusing their services, including harassing others. …
  • Steps companies should take to limit anonymised users using their services to abuse others.”

The Initial Response said:

Nothing.

The Ministers said:

Q25 John Nicolson: “Would you like to see online harms legislation compel social media companies to verify the identity of users, not of course to publish them but simply to verify them before the accounts are up and running?

Oliver Dowden: There is certainly a challenge around, as you mentioned, bots, which are sometimes used by hostile state activity, and finding better ways of verifying to see whether these are genuine actors or whether it is co-ordinated bot-type activity. That is through online harms but there is obviously a national security angle to that as well.”

Q530 Ms Abbott: “Finally, would you consider changing the regulation, so you could post anonymously on a website or Twitter or Facebook, but the online platform would have your name and address? In my experience, when you try to pursue online abuse, you hit a brick wall because the abuser is not just anonymous when they post, the online platform doesn’t have a name and address either.

Caroline Dinenage: That is a really interesting idea. It is definitely something that we have been discussing. With regard to the online harms legislation that we are putting together at the moment, we have said very clearly that companies need to be much more transparent. They need to set out standards and they need to clarify what their duty of care is and to have a robust complaints procedure that people can use and can trust in. That is why we are also appointing a regulator that will set out what good looks like and will have expectations but also powers to be able to demand data and information and to be able to impose sanctions on those that they do not feel are abiding by them.

Q531 Chair: What does that actually mean? Does that mean that you think that the regulator should have the power to say that social media companies should not allow people to be … [a]nonymous to the platform?

Caroline Dinenage: This is something that we are considering at the moment. There are a number of things here. In the online harms legislation, the regulator will set out their expectations.

Chair: We can’t devolve everything to the regulator. Something like this is really important—should social media companies be allowed to not know who it is that is using their platforms? That feels like a big question that Parliament should take a view on, not something we just hand over to a regulator and say, “Okay, whatever you think,” later on.

Caroline Dinenage: Yes, exactly. That is why we are considering it at the moment, as part of the online harms legislation, and that, of course, will come before Parliament.”

Q545 Tim Loughton: “… If I want to set up a bank account and all sorts of other accounts, I must prove to the bank or organisation who I am by use of a utility bill and other things like that. It is quite straightforward. What is the downside of a similar requirement being enforced by social media platforms before you are allowed to sign up for an account? This is an issue that we have looked at before on the Committee. Many of us have suggested that we should go down that route. I gather that it already happens in South Korea. You say that you are looking at it, Minister Dinenage. What, in your view, is the downside of having such scrutiny?

Caroline Dinenage: You make a very compelling argument, Mr Loughton. A lot of what you said is extremely correct. The only thing we are mulling over and trying to cope with is whether there is any reason for anonymity for people who are victims, who want to be able to whistleblow, and who may be overseas and might not want to identify themselves because they fear for their lives or other harm. There are those issues of anonymity and protecting someone’s safety and ability to speak up. That is what we are wrestling with.

Q546 Tim Loughton: By the same token, you could have somebody with a fake identity who is falsely whistleblowing or pushing around propaganda, so it cuts both ways. I fail to see the downside of having a requirement that you have to prove who you are—not least because we know what happens when people are caught and have their sites taken down. Five minutes later, they set up another new anonymous site peddling the same sort of false information.

Caroline Dinenage: You make a very compelling argument. This is such an important piece of legislation, and we have to get it right. As I say, it is world-leading. Everybody is looking at us to see how we do it. We need to make sure that we have taken into consideration every angle, and that is what we are doing at the moment.”

The Consequence: Identity verification is evidently an issue that is bubbling to the surface. The most fundamental objection is that the right of freedom of expression secured by Article 19 of the Universal Declaration of Human Rights is not conditioned upon identity verification. It does not say:

"Everyone has the right to freedom of opinion and expression upon production of any two of the following: driving licence, passport, recent utility or council tax bill...".

In South Korea, legislation imposing online identity verification obligations was declared unconstitutional in 2012.

The Home Affairs Committee raised, to the best of my knowledge for the first time in any Parliamentary deliberation on the Online Harms project, the question of what should be decided by Parliament and what delegated to a regulator. That is not limited to the question of identity verification. It is an inherent vice of regulatory powers painted with such a broad brush that many concrete issues will lie hidden behind abstractions, to surface only when the regulator turns its light upon them – by which time it is far too late to object that the matter should have been one for Parliament to decide. That vice is compounded when the powers affect the individual speech of millions of people.

10. Extraterritoriality


The White Paper said:

“The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users.” (6.9) (emphasis added)

“We are also considering options for the regulator, in certain circumstances, to require companies which are based outside the UK to appoint a UK or EEA-based nominated representative.” (6.10)

The Initial Response said:

Nothing of relevance.

The Ministers said:

“Q569: Andrew Gwynne: Presumably the regulations will apply to all content visibly available in the UK—is that correct?

Baroness Williams: Yes.”

The Consequence: Charitably, perhaps we should assume that the Minister misspoke. There is a vast difference between providing services to UK users and mere visibility in the UK. Given the inherent cross-border nature of the internet, asserting a country’s local law against content on a mere visibility basis is tantamount to asserting world-wide extra-territoriality. 

It would be more consistent with the direction in which internet jurisdictional norms have moved over the last 25 years to apply a test of whether the provider is targeting the UK.

