Sunday, 2 February 2020

Online Harms IFAQ*


*Insufficiently Frequently Asked Questions

Eager Student has some questions for Scholarly Lawyer about the UK government’s Online Harms White Paper.

ES. Why are people so perturbed about the proposed Online Harms legislation?

SL. Most people aren’t, even if they have heard of it. But they should be.

ES. What can be wrong with preventing harm?

SL. Indeed, who could be against that? But this is a proposal for legislation. We have to peel off the label and check what is in the bottle.  When we look inside, we find a wormhole back to a darker age.

ES. A darker age?

SL. Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace.

The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same.  And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.

ES. This is melodramatic, surely. How could a digital Lord Chamberlain censor the tens of millions of us who use the internet and social media?

SL. Would the regulator personally run a blue pencil through specific items of content? No. But the legislation is no less censorious for that. It would co-opt internet platforms and search engines to do the work, under the banner of a duty of care articulated and enforced by the regulator.

ES. Doesn’t the internet route around censorship?

SL. Once your post or tweet has passed through your ISP’s system it is on the internet, chopped up into fragments and bouncing around an interconnected mesh of routers: very difficult to isolate and capture or block.

But your own ISP has always been a choke point. It can be turned into a gatekeeper instead of a gateway. That danger was recognised early on, when Article 15 of the Electronic Commerce Directive barred EU governments from imposing general monitoring obligations on intermediaries.

Nowadays we communicate via large platforms: social media companies, messaging apps and the rest. Our tweets, posts and instant messages end up in a few databases. So we have new choke points. Companies may be able to block, filter and take down users’ content on their platforms, even if with a broad brush rather than a blue pencil. Our digital Lord Chamberlain would commandeer that capability at one remove.

ES. Even so, where is the censorship? This is about online safety, not suppressing controversy.

SL. The particular vice at the heart of the White Paper is the latitude for the regulator to deem things to be harmful. If the proposals were only about safety properly so-called, such as risk of death and personal injury, that would correspond to offline duties of care and draw a clear line. Or if the proposals were only about unlawful online behaviour, the built-in line between lawful and unlawful would provide some protection against overreach. But the proposals are neither, and they do not.

ES. If something is not presently unlawful, would the White Paper proposals make it so?

SL. Yes and no. Asserting that you have been harmed by reading something that I have posted online is not a basis for a legal claim against me, even if you are shocked, outraged, offended or distressed by it. There would have to be more, such as incitement or harassment. That is the same rule as for publishing offline.  

But the White Paper’s duty of care could require a platform to take action against that same post, regardless of its lawfulness. It might extend to blocking, filtering or removing. The platform could be fined if it didn’t have good enough systems for reducing the risk of harm that the regulator deemed to exist from that kind of content.

ES. So the platform could face legal consequences for failing to censor my material, even though I have done nothing unlawful by writing and posting it?

SL. Exactly so.

ES. I’m curious as to why anyone would set up a new regime to address behaviour that is already unlawful.

SL. To create a new law enforcement route. A digital Lord Chamberlain would co-opt online intermediaries to the task: mixing our metaphors horribly, an online sheriff riding out with a posse of conscripted deputies to bring order to Digital Dodge. Whether that is a good way of going about things is a matter of highly contested opinion.

But the genesis of the White Paper was in something quite different from unlawful behaviour: lawful online content and behaviour said to pose a risk of harm, whether to an individual user or to society.

ES. Is it really right that the regulator could treat almost anything as harmful?

SL. The White Paper proposes to impose a duty of care on online intermediaries in respect of harm. It offers no definition of harm, nor what constitutes risk of harm. Apart from a few specific exclusions, that would be open to interpretation by the new online regulator.

ES. A duty of care. Like health and safety?

SL. The name is similar, but the White Paper’s duty of care bears scant resemblance to comparable offline duties of care.

ES. How does it differ?

SL. In two main ways. First, corresponding duties of care that have been developed in the offline world are limited to safety in the acknowledged sense of the word: risk of death, personal injury and damage to physical property.  Those are objective concepts which contribute to keeping duties of care within ascertainable and just limits. Universal, generalised duties of care do not exist offline.  

The White Paper duty of care is not limited in that way. It speaks only of harm, undefined. Harm is a broad concept. Harm and safety become fluid, subjective notions when applied to speech. We can find ourselves asserting that speech is violence and that we are entitled to be kept safe not just from threats and intimidation, but from opinions and facts that we find disagreeable.

The online regulator could adopt the standard of the most easily offended reader. It could re-invent blasphemy under the umbrella of the duty of care. It could treat the internet like a daytime TV show. Ofcom, let us not forget, suggested to survey participants in 2018 that bad language is a “harmful thing”. In last year’s survey it described “offensive language” as a “potential harm”.

Undefined harm, when applied to speech, is not just in the eye and ear of the beholder but in the opinion of the regulator.  The duty, and thus the regulator’s powers, would deliberately go far wider than anything that is contrary to the law.

ES. So even if our digital Lord Chamberlain is not censoring specific items, it still gets to decide what kind of thing is and is not harmful?

SL. Yes.  Writing a Code of Practice can be as censorious as wielding a blue pencil.

ES. And the second difference from an offline duty of care?

SL. Health and safety is about injury caused by the occupier or employer: tripping over a loose floorboard, for instance.  A duty of care would not normally extend to injury inflicted by one visitor on another. It could do if the occupier or employer created or exacerbated the specific risk of that happening, or assumed responsibility for the risk.

But least of all is there an offline duty of care in respect of what one visitor says to another. That is exactly the kind of duty that the White Paper proposals would impose on online intermediaries.

The White Paper is twice removed from any comparable offline duty of care.

ES. Could they not define a limited version of harm?

SL. John Humphrys suggested that on the Today programme back in April last year. The New Zealand Harmful Digital Communications Act, passed in 2015, did define harm: “serious emotional distress”. That has the merit of focusing on a significant aspect of the mischief that is so often associated with social media: bullying, intimidation, abuse and the rest. But that would be a small sliver of the landscape covered by the White Paper.

Recently one of the progenitors of the plan for a duty of care, the Carnegie UK Trust, published its own suggested draft Bill. It does not define or limit the concept of harm.

ES. Is there a disadvantage in defining harm?

SL. That depends on your perspective. A clear, objective boundary for harm would inevitably shrink the regulator’s remit and limit its discretion. It would have to act within the boundaries of a set of rules. So if your aim is to maximise the power and flexibility of the regulator, then you don’t want to cramp its style by setting clear limits. From this viewpoint vagueness is a virtue.

Opinions will differ on whether that is a good or a bad thing. Traditionally the consensus has been that individual speech should be governed by rules, not discretion. Discretion, it might be thought, is a defining characteristic of censorship.

The very breadth of the territory covered by the White Paper and its predecessor, the Internet Safety Strategy Green Paper, may explain why the objection to undefined harm is so steadfastly ignored. A catch-all is no longer a catch-all once you spell out what it catches.

ES. You say this is about individual speech, but isn’t the point to regulate the platforms?

SL. It is true that the intermediaries would be the direct subject of the duty of care. They would suffer the penalties if they breached the duty. But we users are the ones who would suffer the effects of our online speech being deemed harmful. Any steps that the platforms were obliged to take would be aimed at what we do and say online. Users are the ultimate, if indirect, target.

ES. I have seen it said that the duty of care should focus on platforms’ processes rather than on specific user content. How might that work?

SL. Carnegie has criticised the White Paper as being overly focused on types of content. It says the White Paper opened up the government to “(legitimate) criticism from free speech campaigners and other groups that this is a regime about moderation, censorship and takedown”. Instead the regime should be about designing services that “hold in balance the rights of all users, reduce the risk of reasonably foreseeable harms to individuals and mitigate the cumulative impact on society.”

To this end Carnegie proposes a ‘systemic’ approach: “cross-cutting codes which focus on process and the routes to likely harm”. For the most part its draft Bill categorises required Codes of Practice not by kinds of problematic speech (misinformation and so on) but in terms of risk assessment, risk factors in service design, discovery and navigation procedures, how users can protect themselves from harm, and transparency. There would be specific requirements on operators to carry out risk assessments, take risk minimisation measures, conduct testing and be transparent.

Carnegie has said: “The regulatory emphasis would be on what is a reasonable response to risk, taken at a general level. In this, formal risk assessments constitute part of the harm reduction cycle; the appropriateness of responses should be measured by the regulator against this.”

ES. Would a systemic approach make a difference?

SL. The idea is that a systemic approach would focus on process design and distance the regulator from judgements about content. The regulator should not, say Carnegie, be taking decisions about user complaints in individual cases.

But risk is not a self-contained concept. We have to ask: ‘risk of what?’ If the only answer provided is ‘harm’, and harm is left undefined, we are back to someone having to decide what counts as harmful. Only then can we measure risk. How can we do that without considering types of content? How can the regulator measure the effectiveness of intermediaries’ harm reduction measures without knowing what kind of content is harmful?

What Carnegie has previously said about measuring harm reduction suggests that the regulator would indeed have to decide which types of content are to be regarded as harmful: “…what is measured is the incidence of artefacts that – according to the code drawn up by the regulator – are deemed as likely to be harmful...”. By “artefacts” Carnegie means “types of content, aspects of the system (e.g. the way the recommender algorithm works) and any other factors”. (emphases added)

ES. What is the harm reduction cycle?

SL. This is the core of the Carnegie model. It envisages a repeated process of adjusting the design of intermediaries’ processes to squeeze harm out of the system.

Carnegie has said: “Everything that happens on a social media or messaging service is a result of corporate decisions: …” From this perspective user behaviour is a function of the intermediaries’ processes - processes which can be harnessed by means of the duty of care to influence, nudge – or even fundamentally change - user behaviour.

Carnegie talks of a “Virtuous circle of harm reduction on social media and other internet platforms. Repeat this cycle in perpetuity or until behaviours have fundamentally changed and harm is designed out.”

There is obviously a question lurking there about the degree to which users control their own behaviour, versus the degree to which they are passive instruments of the platforms that they use to communicate.

ES. I sense an incipient rant.

SL. Mmm. The harm reduction cycle does prompt a vision of social media users as a load of clothes in a washing machine: select the programme marked harm reduction, keep cycling the process, rinse and repeat until harm is cleaned out of the system and a sparkling new internet emerges.

But really, internet users are not a bundle of soiled clothes to be fed into a regulator’s programmed cleaning cycle; and their words are not dirty water to be pumped out through the waste hose.

Internet users are human beings who make decisions – good, bad and indifferent - about what to say and do online; they are autonomous, not automatons. It verges on an affront to human dignity to design regulation as if people are not the cussed independent creatures that we know they are, but human clay to be remoulded by the regulator’s iterative harm reduction algorithm. We did not free ourselves from technology Utopianism only to layer technocratic policy Utopianism on top of it.

ES. Can system be divorced from content?

SL. It is difficult to see how, say, a recommender algorithm can be divorced from the kind of content recommended, unless we think of recommending content as likely to be harmful per se. Should we view echo chambers as intrinsically harmful regardless of the content involved? Some may take that view, perhaps the future regulator among them. Whether it is appropriate for a regulator to be empowered to make that kind of judgement is another matter.

Realistically, a duty of care can hardly avoid being about - at least for the most part - what steps the intermediary has to take in respect of what kinds of content. The blue pencil – even in the guise of a broad brush wielded by intermediaries according to Codes of Practice arranged by process – would ultimately rest in the hand of our digital Lord Chamberlain.

ES. Surely there will be some limits on what the regulator can do?

SL. The White Paper excluded some harms, including those suffered by organisations.  The Carnegie draft Bill does not. It does contain a hard brake on the regulator’s power: it must not require service providers to carry out general monitoring. That is intended to comply with Article 15 of the Electronic Commerce Directive.

The government has also said there is no intention to include the press. The Carnegie draft Bill attempts a translation of that into statutory wording.

ES. Are there any other brakes?

SL. The regulator will of course be bound by human rights principles. Those are soft rather than hard brakes: less in the nature of red lines, more a series of hurdles to be surmounted in order to justify the interference with the right – if the right is engaged in the first place.

The fact that a right is engaged does not mean, under European human rights law, that the interference is always a violation. It does mean that the interference has to be prescribed by law and justified as necessary and proportionate.

ES.  Would a duty of care engage freedom of expression?

SL. The speech of individual end users is liable to be suppressed, inhibited or interfered with as a consequence of the duty of care placed on intermediaries. Since the intermediaries would be acting under legal compulsion (even if they have some choice about how to comply), such interference is the result of state action. That should engage the freedom of speech rights of end users. As individual end users’ speech is affected, it doesn’t matter whether corporate intermediaries themselves have freedom of expression rights.

The right is engaged whether the interference is at the stage of content creation, dissemination, user engagement or moderation/deletion. These are the four points that Carnegie identifies in a paper that discusses the compatibility of its draft Bill with fundamental freedoms.

Carnegie contemplates that tools made available to users, such as image manipulation and filters, could be in scope of the duty of care. It suggests that it is at least arguable that state control over access to creation controls (paint, video cameras and such like) could be seen as an interference. Similarly for a prohibition on deepfake tools, if they were held to be problematic.  

Carnegie suggests that it is a difficult question whether, if we have been nudged in one direction, it is an interference to counter-nudge. However, the question is surely not whether counter-nudging required under a duty of care is an interference that engages the right of freedom of expression (it is hard to see how it could not be), but whether the interference can be justified.

ES. What if intermediaries had only to reduce the speed at which users’ speech circulates, or limit amplification by tools such as likes and retweets?

SL. Still an interference. People like to say that freedom of speech is not freedom of reach, but that is just a slogan. If the state interferes with the means by which speech is disseminated or amplified, it engages the right of freedom of expression. Confiscating a speaker’s megaphone at a political rally is an obvious example. Cutting off someone’s internet connection is another. The Supreme Court of India said last month, in a judgment about the Kashmir internet shutdown:
“There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial.”

Seizing a printing press is not exempted from interference because the publisher has the alternative of handwriting. Freedom of speech is not just freedom to whisper.

ES. So you can’t just redefine a right so as to avoid justifying the interference?

SL. Exactly. It is dangerous to elide the scope of the right and justified interference; and it is easy to slip into that kind of shorthand: “Freedom X does not amount to the right to do Y.” The right to do Y is almost always a question of whether it is justifiable to prevent Y in particular factual circumstances, not whether Y is generally excluded from protection.

The Carnegie fundamental freedoms paper says: “Requiring platforms to think about the factors they take into account when prioritising content (and the side effects of that) is unlikely to engage a user’s rights; it is trite to say that freedom of speech does not equate to the right to maximum reach”. However it is one thing to justify intervening so as to reduce reach. It is quite another to argue that the user’s freedom of speech rights are not engaged at all.

ES. So the regulator could justify restricting the provision or use of amplification tools?

SL. If the interference is necessary in pursuance of a legitimate public policy aim, and is proportionate to that aim, then it can be justified. Proportionality involves a balancing exercise with other rights, and consideration of safeguards against abuse. The more serious the interference, the greater the justification required.

ES. That sounds a bit vague.

SL. Yes. It gets more so when we add the notion that the state may have a positive obligation to take action to secure a right. That commonly occurs with rights that may conflict with freedom of expression, such as privacy or freedom of religion.  We can even end up arguing that the state is obliged to inhibit some people’s speech in order to secure the genuine, effective right of others to enjoy their own right of expression; to ensure that the right is not illusory; or to secure the availability of a broad range of information.

ES. The state is required to destroy the village in order to save it?

SL.  Not quite, but increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. 

So Carnegie questions whether removing a retweet facility is really a violation of users' rights to formulate their own opinion and express their views, or rather - to the contrary - a mechanism to support those rights by slowing them down so that they can better appreciate content, especially as regards onward sharing.

The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.

ES. So can we rely on fundamental rights to preserve the village?

SL. The problem is that there are so few red lines. The village can start to sink into a quagmire of competing rights that must be balanced with each other.

The Carnegie fundamental freedoms paper well illustrates the issue. It is a veritable morass of juxtaposed rights, culminating with the inevitable, but not greatly illuminating, conclusion that interferences may be justified if a fair balance is maintained between conflicting rights, and suggesting that the state may have positive obligations to intervene. That is not a criticism of the paper. It is how European human rights law has developed.

ES. You mentioned the ‘prescribed by law’ requirement. How does that apply?

SL. ‘Prescribed by law’ is the first step in assessing Convention compatibility of an interference. If you don’t pass that step you don’t move on to legitimate objective, necessity and proportionality. Prescribed by law means not just issued by the legislature and publicly accessible, but having the quality of law.

Quality of law is an articulation of the rule of law. Restrictions must be framed sufficiently clearly and precisely that someone can, in advance, know with reasonable certainty whether their conduct is liable to have legal consequences. In the context of the duty of care, the legal consequence is that a user’s speech is liable to be the subject of preventive, inhibiting or other action by a platform operator carrying out its duty.

Quality of law is a particular concern for discretionary state powers, which by their nature are liable to be exercised ad hoc and arbitrarily in the absence of clear rules.

ES. Discretionary power would be relevant to the regulator, then.

SL. Yes. As we have seen, the proposed regulator would have very wide discretion to decide what constitutes harm, and then what steps intermediaries should take to reduce or prevent risk of harm. You can build in consultation and 'have regard' obligations, but that doesn't change the nature of the task.

ES. Couldn’t the quality of law gap be filled by the regulator’s Codes of Practice?

SL. It is certainly possible for non-statutory material such as Codes of Practice to be taken into account in assessing certainty and precision of the law. Some would say that the ability of the regulator to fill the gap simply highlights the extent to which the draft Bill would delegate power over individual speech to the regulator. 

ES. Isn’t that a doctrinaire rather than a legal objection?

SL. Partly so. However, in the US ACLU v Reno case in the mid-1990s the US courts took a medium-specific approach, holding that the kinds of restriction that might be justified for broadcast were not so for individual speech on the internet. So the appropriateness of the very model of discretionary broadcast regulation could come into question when applied to individual speech. It bears repeating that the duty of care affects the speech of individuals.

ES. Carnegie says that its emerging model would ‘champion the rule of law online’.

SL. The essence of the rule of law is “the restriction of the arbitrary exercise of power by subordinating it to well-defined and established laws” (OED). That is characteristic of the general law, but not of discretionary regulation.

If anything, regulation by regulator presents a challenge to the rule of law. Its raison d’être is to maximise the power, flexibility and discretion of the regulator. Some may think that is a good approach. But good approach or bad, champion the rule of law it does not.

ES. Thank you.


