Wednesday, 13 December 2017

Cyberleagle Christmas Quiz

15 questions to illuminate the festive season. Answers in the New Year. (Remember that this is an English law blog). 

Tech teasers 

1. How many data definitions does the Investigatory Powers Act 2016 (IP Act) contain?

2. A technical capability notice (TCN) under the IP Act could prevent a message service from providing end to end encryption to its users. True, False or Maybe?

3. Under the IP Act a TCN requiring installation of a permanent equipment interference capability could be served on a telecommunications operator but not a device manufacturer. True, False or Maybe?

4. Who made a hash of a hashtag?


Brave new world


5. Who marked the new era of post-Snowden transparency by holding a private stakeholder-only consultation on a potentially contentious IP Act draft Statutory Instrument?

6. Who received an early lesson in the independence of the new Investigatory Powers Commissioner?



The penumbra of ECJ jurisdiction
  
7. The EU Court of Justice (CJEU) judgment in Watson/Tele2 was issued 22 days after the IP Act received Royal Assent. How long elapsed before the Home Office published proposals to amend the Act to take account of the decision?

8. The Investigatory Powers Tribunal has recently made a referral to the CJEU. What is the main question that the CJEU will have to answer about the scope of its Watson decision?  
 
9. What change was made in the IP Act’s bulk powers, compared with S.8(4) RIPA, that would render the CJEU’s Q.8 answer especially significant?

10. After Brexit we won't need to worry about CJEU surveillance judgments, even if we exit the EU with no deal. True, False or Maybe? 


Copyright offline and online

11. Tweeting a link to infringing material is itself an infringement of copyright. True, False or Maybe?  
 
12. Reading an infringing copy of a paper book is not copyright infringement. Viewing an infringing copy online is. True, False or Maybe?
 
13. Whereas selling a set-top box equipped with PVR facilities is legal, providing a cloud-based remote PVR service infringes copyright. True, False or Maybe?

14. Format-shifting infringes copyright. True, False or Maybe?

15. Illegal downloading is a crime. True, False or Maybe?


 

Tuesday, 14 November 2017

Electronic wills: an idea whose time has yet to come?

Over the last four months the Law Commission of England and Wales has been consulting on the topic of Making a Will, focusing on testamentary capacity and formalities.  Chapter 6 of the Consultation is about Electronic Wills. This is my submission on that topic, from the perspective of a tech lawyer who knows little of the law of wills but has grappled many times with the interaction of electronic transactions and formalities requirements.

Introductory Remarks

Overview
1. The question at the core of Chapter 6 of the Consultation is how to give effect to testamentary intentions in an increasingly electronic environment. This has at least five aspects, which inevitably conflict with each other to an extent:
  • Providing a reasonable degree of certainty that the testator intended the document in question to have the significant legal effects of a will. This is achieved by requiring a degree of formality and solemnity.
  • Ensuring that formalities do not act as a deterrent to putative testators whether through complexity, cost, consumption of time or uncertainty as to how to achieve compliance.
  • Minimising the risk of a testator’s intentions being defeated by an actual failure to comply with formalities or an inability to demonstrate that formalities were in fact complied with.
  • Providing protection against fraud, tampering and forgery, either of the body of the document or of the signature(s) appended to it.
  • Providing for all the above over the potentially long period of time between execution of the will and its being admitted to probate.
2. The tensions between these requirements mean that a balance has to be struck that will not perfectly satisfy any of them, as is the case with the current regime designed for an offline environment.

Signatures versus other formalities

3. Although the focus of electronic transactions regimes tends to be on signatures, signatures should not be addressed in isolation from other relevant formalities[1]. As the Consultation Paper recognises, there is interaction and dependency between signature, form, medium and process. Although the Consultation Paper does not categorise them as such, for wills formalities of all four kinds exist:
  • Signature: the need for signatures and the (possible) requirement that the signature be handwritten (Consultation Paper 6.20 to 6.30)
  • Form: the caselaw requirement for an attestation clause if a strong presumption of due execution is to arise (Consultation Paper 5.11 to 5.12; confusion around the witness attestation requirement is addressed elsewhere in the Consultation paper.)
  • Medium: the requirement that the will be in writing (Consultation Paper 6.15 to 6.19)
  • Process: the presence and simultaneity requirements for witnessing (Consultation Paper 6.32); and the practical filing requirements for admission to probate (6.97).
4. However the Consultation Paper is not always convincing about the relative importance of these formalities. Thus, in bringing home to the testator the seriousness of the transaction, the ceremony of gathering two witnesses in the same room simultaneously to witness the testator’s signature would seem likely to be more significant than whether or not the signature is handwritten (cf Consultation Paper 6.48, 6.64). If it had to be done in the presence of two witnesses, appending a signature to an electronic document using (for instance) a tablet would surely be no less a matter of consequence than applying a handwritten signature to a paper document.

5. The overall purpose of giving effect to the testator’s intention where electronic methods are involved may be achievable by an appropriate combination of all four kinds of formality. Not all (or even most of) the heavy lifting has necessarily to be done by the signature itself, any more than with a traditional paper will.

The function of a signature

6. The function of a signature is generally threefold: (1) to indicate assent to or an intention to be bound by the contents of the document, (2) to identify the person so doing and (3) to authenticate the document. (There are variations on these functions. For instance the signature of a witness does not indicate assent or an intention to be bound, but instead is intended to verify the signature of the party to the document.)

7. The difference between the identification and authentication functions can be seen if we consider the different kinds of repudiation that may occur. Identification protects against the claim: ‘That is not my (or X’s) signature’.  Authentication protects against the claim: ‘That is my (or X’s) signature, but that is not the document that I (or X) signed’.

Strengths and weaknesses of electronic signatures

8. As the Consultation Paper notes, ordinary electronic signatures (typed names, copied scans) are poor identifiers and authenticators. Nevertheless English law, in keeping with its historically liberal attitude to formalities requirements generally, rightly regards such signatures as adequate in most cases in which a signature is required by statute. Manuscript signatures are better, but not perfect, identifiers and authenticators. A properly formed manuscript signature is better than a mark, but both are valid.

9. At the other end of the scale of sophistication, certificate-based digital signatures are very good (far better than manuscript signatures) at authenticating the signed document. However they remain relatively poor at assuring the identity of the person who applied the digital signature. This is because, however sophisticated the signature technology may be, access to the signature creation device will (in the absence of a biometric link) be secured by a password, a PIN, or something similar. As the Consultation Paper rightly points out, these are weak forms of assurance (Consultation Paper 6.60 to 6.68). This aspect can be improved by adopting methods such as two-factor authentication of the user. It may or may not be apparent after the event whether such a technique was used.
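A minimal sketch may help to illustrate the separation between the authentication function (binding the signature to the document) and the identification function (binding the signature to a person). It uses the Python cryptography library and an Ed25519 key pair purely for illustration; the will text, the key handling and the printed messages are assumptions for the example, not a description of any system contemplated by the Consultation Paper.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Illustrative key pair only. In practice the private key would sit on a
# signature creation device, typically unlocked by a password or PIN -
# the weak link in identity assurance discussed above.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

will_text = b"This is the last will and testament of ..."
signature = private_key.sign(will_text)

# Authentication: altering even one character of the document after
# signing causes verification to fail.
try:
    public_key.verify(signature, will_text + b" (amended)")
except InvalidSignature:
    print("Not the document that was signed")

# Identification: successful verification shows only that some holder of
# the private key signed this exact document; it says nothing about which
# human being unlocked the key.
public_key.verify(signature, will_text)
print("Signature verifies against the unaltered document")
```

The point of the sketch is that the mathematics binds the signature tightly to the document, while the binding to a person is only as strong as whatever controls access to the private key.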

Common traps in legislating for electronic transactions

Over-engineering and over-estimating the reliability of non-electronic systems

10. The Consultation Paper refers to the apparently stillborn attempt to legislate for electronic wills in Nevada. I am not familiar with the particular legislation in question, but will offer some general comments about the temptation for legislation to impose over-engineered technical solutions.

11. Over-engineering is a natural consequence of over-estimating the reliability of non-electronic systems and thus, in the name of equivalence, attempting to design in a level of assurance for the electronic system that does not exist in the non-electronic sphere.  As the Australian Electronic Commerce Expert Group stated in its 1998 Report to the Attorney-General[2]:
“There is always the temptation, in dealing with the law as it relates to unfamiliar and new technologies to set the standards required of a new technology higher than those which currently apply to paper and to overlook the weaknesses that we know to inhere in the familiar.”
12. Over-engineering occurred in the early days of digital signatures, when complex statutes were passed in some jurisdictions (the Utah Digital Signatures Act being the earliest and best known example) in effect prescribing the use of PKI digital signatures in an attempt to achieve a guarantee of non-repudiation far beyond that provided by manuscript signatures. These kinds of rules were found to be unnecessary for everyday purposes and have tended to be superseded by facilitative legislation such as the US ESign Act.

Over-technical formalities requirements

13. Over-technical formalities requirements are a potential source of concern. This is for two reasons. 

14. First, they increase the chance that a putative testator or a witness will make an error in trying to comply with them. As the Sixth Interim Report of the Law Revision Committee said in 1937 in relation to the Statute of Frauds:
" 'The Act', in the words of Lord Campbell . . . 'promotes more frauds than it prevents'. True it shuts out perjury; but it also and more frequently shuts out the truth. It strikes impartially at the perjurer and at the honest man who has omitted a precaution, sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to assert that 'in the vast majority of cases its operation is simply to enable a man to break a promise with impunity, because he did not write it down with sufficient formality.’ " 
15. Second, a person attempting to satisfy the formalities requirements must be able to understand how to comply with them without resort to expert technical assistance, and to be confident that they have in fact complied. A formalities specification that requires the assistance of an IT expert to understand it will deter people from using the procedure and increase the incidence of disputes for those who do so. Injustice will be caused if the courts are filled with disputes about whether the right kind of electronic signature has been used and where there is no real doubt about the identity of the testator and the authenticity of the will.

Over-technology-specific

16. As a general rule technology-neutral legislation is preferable to technology-specific legislation.

17. This is for two reasons. First, technology-specific legislation can be overtaken by technological developments, with the result either that it is uncertain whether a new technology complies with the requirements, or that the legislation may clearly exclude the new technology even though functionally it performs as well or better than the old technology. Second, technology-specific legislation tends to lock in particular technology vendors rather than opening the market to all whose offerings are able to provide the required functionality (cf Consultation paper 6.36 and 6.37).

18. Against that, however, is the concern that if legislation is drafted at a very high level of abstraction in order to accommodate possible future technologies, it carries the price of uncertainty as to whether any given technology does or does not comply with the formalities requirements. That is most undesirable, for the reasons set out above.

19. Reconciling these opposing considerations is no easy task. Indeed it may be impossible to achieve a wholly satisfactory resolution. Nevertheless the competing considerations should be recognised and addressed.

Validity versus evidence

20. Validity and evidence have to be considered separately. Validity is not a matter of evidential value. Whilst the overall purpose of a formality requirement may be to maximise evidential value and to deter fraud (cf Lim v Thompson), the formality requirement itself stands separate as a rule of validity. 

Commentary on Chapter 6 of Consultation Paper

21. In the light of the introductory discussion above I offer the following comments on some aspects of Chapter 6. I will start with Enabling Electronic Wills (6.33 to 6.43), since that contains some of the most fundamental discussion.

Enabling Electronic Wills (6.33 to 6.43)

6.34 ‘It is highly likely that their use will become commonplace in the future’.

22. Since ‘the future’ is an indeterminate period this is probably a reasonably safe prediction. However, with apologies to Victor Hugo, there is nothing as feeble as an idea whose time has yet to come.

23. Science fiction films from the 1950s and 1960s routinely showed video communicators – an idea that stubbornly refused to take off for another 50 years. Even now video tends to be used for the occasions when seeing the other person is an actual benefit rather than a hindrance – special family occasions, business conferencing, intimate private exchanges for example.

24. Electronic wills have something of that flavour: possible in principle, but why do it when paper has so many advantages: 
  • (Reasonably) Permanent
  • Cheap
  • (Reasonably) secure
  • (Reasonably) private
  • Serious (ceremonial)
  • (Relatively) simple to comply with
25. By contrast electronic wills, as technology currently stands, would be inherently:
  • Impermanent
  • Costly
  • Insecure
  • Less private
  • Casual
  • Complicated to comply with
26. We cannot exclude the possibility that the effort and expense required to overcome, or at least mitigate, these disadvantages may at the present time be out of proportion to the likely benefit. It is perhaps no surprise that stakeholders report little appetite for electronic wills. We should beware the temptation to force the premature take-up of electronic wills simply because of a perception that everything should be capable of being done electronically.    

27. Whilst predictions in this field are foolish, one way in which technology might enable electronic wills in the future is the development (perhaps from existing nascent e-paper technologies) of cheap durable single-use tablets on which an electronic document and accompanying testator and witness signature details could be permanently inscribed and viewed electronically.

28. This is not to say that legislation should not be re-framed now to facilitate the development of appropriate forms of electronic will. Ideally such legislation should capture the essential characteristics of the desired will-making formalities in a technology-neutral but understandable way, rather than prescribe or enable the prescription of detailed systems. In theory it would not even matter if currently there is no technology that can comply with those characteristics electronically.  Such legislation would allow for the possible future development of as yet unknown compliant technologies.

29. However as already discussed, achieving that aim while at the same time leaving a putative testator with no room for doubt about whether a particular technology does or does not satisfy the requirements of the law is not an easy task. It is also pertinent to consider how the presumption of due execution might apply in an electronic context. With paper the presumption arises from matters apparent on the face of the will (Consultation Paper, 5.11). The more technical and complex the formalities requirements for an electronic will, the less will it be possible for compliance with those formalities to be apparent on the face of the document.

6.34 ‘We have focused on electronic signatures’

30. As already indicated, to focus on electronic signatures to the exclusion of the other relevant formalities is, I would suggest, an invitation to error. In reality the Consultation Paper does, of necessity, refer to the other formalities. However it would be preferable explicitly to recognise the interdependence of the four categories of formality and to consider them as a coherent whole.

6.35 ‘First and most importantly, electronic signatures must be secure’

31. This, it seems to me, risks falling into the related traps of over-engineering and of over-estimating the reliability of non-electronic systems (see [10] above).

32. Nor am I sure that the paragraph adequately separates the three functions of a signature discussed above: assent to terms/intention to be bound, identification and authentication.

33. The statement that an electronic signature must provide “strong evidence that a testator meant formally to endorse the relevant document” elides all three functions. The next sentence “electronic signatures must reliably link a signed will to the person who is purported to have signed it” elides the second and third functions. We then have the statement “Handwritten signatures perform this function well”. It is unclear which function or functions are being referred to. Handwritten signatures do not perform each function equally well.

34. It is true that a (genuine) handwritten signature, buttressed by the surrounding formality of double witnessing, is strong evidence of intention to be bound.

35. A well-formed handwritten signature (a ‘distinctive mark’, in the words of the Consultation Paper) provides reasonably strong evidence of identity, assuming that comparison handwriting can be found (something not required by the Wills Act and so more in the nature of a working assumption - cf para 6.53 of the Consultation paper). A mark (which is permissible under the Wills Act) does not do so. The witnesses (if available) are also relevant to proof of identity.

36. Parenthetically, one wonders whether the evidential weight assumed to be provided by signatures may have changed over the period since the enactment of the Wills Act 1837. The use of marks may have been more widespread than today and forensic techniques must have been less advanced. Do we now attribute greater reassurance to the use of a handwritten signature than was originally the case? In any event, given the wide degree of latitude allowed to the form of a handwritten signature, the degree of assurance cannot be regarded as uniform across all handwritten signatures.

37. A handwritten signature is weak evidence of linkage to the document. The signature is present only on the page on which it appears. Proof of the integrity of the whole document (if required) would depend on factors that have nothing to do with the signature (e.g. analysis of the paper and typescript ink).

38. Manuscript signatures provide a degree of evidential value for some relevant facts, but they are by no means perfect. It is of course true that a typed signature is of less evidential value than most manuscript signatures. Conversely, as discussed above ([9]) even the most sophisticated electronic signature is only as secure as its weakest link: the password or PIN (or combinations of such), or other mechanisms, that the testator has used to protect the signature key.

39. Notwithstanding its common usage I would tend to avoid the use of the word ‘secure’ in relation to electronic signatures without making clear which function or functions of a signature are being referred to and what precisely is meant, in that context, by ‘secure’.

40. Eliding the related roles of signatures and other formalities is apt to cause unnecessary confusion and, I would suggest, risks unintentionally placing too much of the formalities burden on the electronic signature.

6.35 ‘We have worked on the basis that electronic signatures should be no less secure than handwritten signatures’

41. On the face of it this is unexceptional. However, on closer inspection it suffers in two respects.

42. The first, already mentioned, comes from considering the signature in isolation from other formalities. In principle an electronic signature could permissibly be less secure than a manuscript signature if other formalities were sufficiently strong to compensate. For instance (without necessarily recommending this) the view could be taken that a notarised typewritten electronic signature would be acceptable (if a satisfactory way of notarising electronic documents had been found). The electronic signature itself would be less secure than the manuscript signature, but the combination of formalities could be adequate. Use of a notary instead of witnesses would avoid the authorisation problem identified at Consultation Paper 6.84.

43. The second is that when we break down the functions of the signature, as above [6], then factor in the variations in ‘security’ provided by the range of permissible handwritten signatures, it is quite unclear what is meant by the level of ‘security’ of a handwritten signature.  The temptation (see [11] above) is to over-estimate the security of a handwritten signature when making a comparison of this kind.

6.35 ‘It is essential that a legal mechanism exists for determining which electronic signatures are sufficiently secure, and which are not.’

44. Security (whatever may be meant by that in context) is one aspect of an electronic signature. Given what I have said above about the respective merits of technology-neutral and technology-specific legislation, it is probably inevitable that if the electronic signature itself is to bear any of the formalities burden, there will have to be some definition of which kinds of signature qualify and which do not. This is, however, a potential minefield. It is almost impossible to define different kinds of signatures at any level of generality in a way that enables a lay person to understand, or that enables an IT expert to say with certainty, what qualifies and what doesn’t. One only has to look at eIDAS and the Electronic Signatures Directive before it to appreciate that. The ability to be certain that one has complied with the necessary formalities of making a will is surely a sine qua non.

45. At risk of over-repetition it is the whole bundle of formalities, not just the signature, that requires a clear set of legal rules for the electronic environment.

6.36 'There is a risk that narrowly specifying types of valid electronic will could be counterproductive.'

46. Agreed. However see comments above ([16] to [19]) regarding the difficulties of drawing a viable balance between technology-specific and technology-neutral. Also, it is possible (although I have not investigated the matter) that the problem with the existing attempts mentioned in the Consultation might have been over-engineering rather than technology-specificity. Although the two often go hand in hand and over-engineering is always technology-specific, the converse is not necessarily true. A requirement of paper is technology-specific, but not over-engineered.

6.38

47. If the principles of clear and understandable requirements for all relevant formalities are adhered to, it ought to follow that any technical method that complies with those formalities is permissible. If all that is being said here is that the requirements must not be so abstract as to create uncertainty as to what does and does not comply, that must be correct (see above [18]).

48. If perhaps this paragraph is recognising that formalities other than the signature itself are relevant, then I would endorse that (see above [3]). Even so this paragraph appears to treat the other formalities as something of an afterthought. This is in my view not a good approach. The better approach is to treat all the formalities as a coherent, interdependent whole.

49. If the last sentence is saying that the law should set out a clear set of formalities for electronic wills, that is one thing. If it is suggesting the establishment of some kind of regulatory body to oversee will-making, that is another matter. Similarly it is unclear what is intended by the reference in 6.39 to ‘regulating’ electronic wills.

6.39 and 6.40

50. See comments on 6.45 below.

6.41

51. Witnessing requirements are one of the related formalities discussed above ([3]). Again, however, I believe it is an error to view witnessing requirements as a secondary issue, to be considered consequentially upon the introduction of electronic signatures. The formalities should be approached as a coherent, interdependent whole.

6.42

52. This paragraph reinforces my view expressed in the previous paragraph.  Whilst it is correct that the suitability of any particular method of witnessing would depend on precisely how a will is to be electronically signed, it seems to me unhelpful to exclude altogether the possibility of dispensing with witnessing as traditionally understood.   

53. For instance, in the hypothetical notarisation example given above [42] (see also [67]) there would be no need for separate witnessing. For a certificated digital signature there might be an argument that the certification authority could substitute for some (but not necessarily all) of the functions of a witness (although the points raised by Stephen Mason and Nicholas Bohm in their submission dated 14 August 2017 regarding long term assurance are well made). 

Uncertainty in the current law

6.45 (Consultation Question 31)

54. I suggest the Law Commission should consider whether some limited kinds of electronic signature in conjunction with appropriately crafted form, medium and process formalities should be permissible under the Wills Act, coupled if appropriate with an enabling power for future extensions.

Electronic Signatures – methods and challenges (6.46 to 6.87)

55. I have read the submission of Stephen Mason and Nicholas Bohm dated 14 August 2017.  I will not repeat what they say about this section of the Consultation since I agree with much of it. In particular I support paragraphs 24 to 29, 31 to 33 and 36 of their submission.  In addition I have the following comments.

6.46 'methods such as passwords are considered to be signatures'

56. Like any other method a password can serve as a signature only if there is intent thereby to authenticate the document and assent to or be bound by its terms.  While it may be the case that a password can therefore serve as a signature (see e.g. Bassano v Toft [2014] EWHC 377 (QB), where clicking on an ‘I Accept’ button was held to be a signature), it would seem likely to be a minority of cases in which it would do so. Most passwords are not used as signatures. It would seem debatable whether a password or PIN used to access a signature device or method is itself used as a signature, particularly where further steps are required before the signature is applied to the document.

6.49 'we expect viable electronic signatures to have similar or better value'

57. See comments at [41] to [43] above.

6.52 ‘A high risk of fraud’

58. This might be better described as a high vulnerability to fraud.

6.52 et seq

59. See comments above regarding the three functions of a signature ([6] to [7]) and the need to consider the usefulness (or otherwise) of a handwritten signature separately in relation to each function ([33] to [35]).

6.55 'photocopied writing does not allow for full consideration of all the attributes of writing'

60. True, but if photocopied writing is all that is available (for instance if the original will has been lost) the will is not as such invalidated. It can in principle be admitted to probate on provision of appropriate evidence. If it is disputed an expert would presumably have to do his or her best with what is available.

6.57

61. The first sentence is indisputable. However the second sentence does not follow. Other formalities could be adopted to compensate for the ‘insecurity’ of an ordinary electronic signature.

6.58 and 6.59

62. The fact that marks are permissible and that wills executed in that way can be verified in a different way from forensic examination (by extrinsic evidence) are pointers to how things could be done, rather than an anomaly to be disregarded. Whilst concerns about deluging the Probate Service with extrinsic evidence are understandable, that risk could be mitigated by introducing more stringent surrounding formalities where an ordinary electronic signature is used. 

Electronic signatures and eIDAS

6.24

63. eIDAS, as an EU Regulation, has direct effect in the UK. The 2016 Regulations are in addition and also make consequential amendments to existing UK legislation.

6.26

64. Article 2(3) would enable requirements of form to be applied. However it is not clear to me that eIDAS is limited to the commercial and transactional context. Electronic identification schemes (Consultation Paper, fn 20) are unrelated to electronic signatures.

6.28

65. See Mason and Bohm submission at paras 24 to 26 as to the apparent technical misunderstanding regarding the need for a counterparty.

6.30

66. Whatever is done must nevertheless comply with eIDAS (although this may be overtaken by Brexit).

Some illustrative scenarios for possible consideration

These scenarios are put forward to illustrate how consideration of the four kinds of formality as a coherent whole could lead to different approaches from focusing on electronic signatures as the primary concern. They do not pretend to be fully worked out proposals.

Ordinary electronic signature plus notarisation


  • Signature: Any electronic signature is permissible
  • Form: Signature plus notarisation (no witness required)
  • Medium: Durable medium?
  • Process: E-notarisation (if available)

67. The advantage of such a process would be that the security, seriousness and ceremonial aspect otherwise provided by witnessing would be retained (the notary standing in for the witnesses), while not placing on the testator the burden of understanding or implementing secure digital signature systems. That burden would fall on the notary, who as a professional provider of notary services would be well placed to make the necessary investment of time and money in training and in acquiring suitable equipment.

68. The disadvantage compared with the traditional witnessing process is the need to find a professional notary, who would charge a fee for the notarisation. However, in the context of enabling a new process as an optional alternative to the traditional one, that may be acceptable.  

69. I am not aware of whether UK notaries yet offer full e-notarisation services as is done in at least some states in the USA (see Mason and Bohm submission para 52). It may be that legislation would be required to enable that. However since the essence of notarisation is that the notary makes checks on identity in relation to formal document signing, this would seem to be an option worth exploring.

Electronic in-presence signature and witnessing


  • Signature: Qualified electronic signature? (testator and witnesses)
  • Form: As present
  • Medium: Durable medium?
  • Process: As present. Witnesses would observe testator applying signature to document on screen, then do the same.

70. This method would avoid the need to find and pay a professional notary but is more challenging for the testator and witnesses, each of whom would have to equip themselves with a signature device capable of applying (say) a qualified electronic signature.

71. The eIDAS regime ought in principle to assure that the device does apply a conformant signature, assuming that the relevant providers are on an EU trusted providers register.  However in practice this may not be something that a lay person can be completely confident about. There may also remain challenges as to how to establish, perhaps many years later, that the signature was indeed a QES and as to how the document and any associated record of the method used to sign it should be stored (cf Mason and Bohm submission). The effects of Brexit on reliance on the eIDAS regime would also have to be considered.




[1] See further my article Can I use an electronic signature? DigitalBusiness.law, 12 May 2017 (http://digitalbusiness.law/2017/05/can-i-use-an-electronic-signature/).
[2] Electronic Commerce: Building the Legal Framework, March 31, 1998.

For further background see my article ‘Legislating for Electronic Transactions’ (Computer and Telecommunications Law Review, 2007 C.T.L.R. 41).




Wednesday, 25 October 2017

Towards a filtered internet: the European Commission’s automated prior restraint machine

The European Commission recently published a Communication on Tackling Illegal Content Online.  Its subtitle, Towards an enhanced responsibility of online platforms, summarises the theme: persuading online intermediaries, specifically social media platforms, to take on the job of policing illegal content posted by their users.

The Commission wants the platforms to perform eight main functions (my selection and emphasis):

  1. Online platforms should be able to take swift decisions on action about illegal content without a court order or administrative decision, especially where notified by a law enforcement authority. (Communication, para 3.1)
  2. Platforms should prioritise removal in response to notices received from law enforcement bodies and other public or private sector 'trusted flaggers'. (Communication, para 4.1)
  3. Fully automated removal should be applied where the circumstances leave little doubt about the illegality of the material (such as where the removal is notified by law enforcement authorities). (Communication, para 4.1)
  4. In a limited number of cases platforms may remove content notified by trusted flaggers without verifying legality themselves. (Communication, para 3.2.1)
  5. Platforms should not limit themselves to reacting to notices but adopt effective proactive measures to detect and remove illegal content. (Communication, para 3.3.1)
  6. Platforms should take measures (such as account suspension or termination) which dissuade users from repeatedly uploading illegal content of the same nature. (Communication, para 5.1)
  7. Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. (Communication, para 5.2)
  8. Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. (Communication, para 4.1)

It can be seen that the Communication does not stop with policing content. The Commission wants the platforms to act as detective, informant, arresting officer, prosecution, defence, judge, jury and prison warder: everything from sniffing out content and deciding whether it is illegal to locking the impugned material away from public view and making sure the cell door stays shut. When platforms aren’t doing the detective work themselves they are expected to remove users’ posts in response to a corps of ‘trusted flaggers’, sometimes without reviewing the alleged illegality themselves. None of this with a real judge or jury in sight. 

In May 2014 the EU Council adopted its Human Rights Guidelines on Freedom of Expression Online and Offline. The Guidelines say something about the role of online intermediaries. Paragraph 34 states:
“The EU will … c) Raise awareness among judges, law enforcement officials, staff of human rights commissions and policymakers around the world of the need to promote international standards, including standards protecting intermediaries from the obligation of blocking Internet content without prior due process.” (emphasis added)
A leaked earlier draft of the Communication referenced the Guidelines. The reference was removed from the final version. It would certainly have been embarrassing for the Communication to refer to that document. Far from “protecting intermediaries from the obligation of blocking Internet content without prior due process”, the premise of the Communication is that intermediaries should remove and filter content without prior due process. The Commission has embraced the theory that platforms ought to act as gatekeepers rather than gateways, filtering the content that their users upload and read.

Article 15, where are you?

Not only is the Communication's approach inconsistent with the EU’s Freedom of Expression Guidelines, it challenges a longstanding and deeply embedded piece of EU law. Article 15 of the ECommerce Directive has been on the statute book for nearly 20 years. It prohibits Member States from imposing general monitoring obligations on online intermediaries. Yet the Communication says that online platforms should “adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive”.  It “strongly encourages” online platforms to step up cooperation and investment in, and use of, automatic detection technologies.

A similar controversy around conflict with Article 15 has been stirred up by Article 13 of the Commission’s proposed Digital Single Market Copyright Directive, also in the context of filtering.

Although the measures that the Communication urges on platforms are voluntary (thus avoiding a direct clash with Article 15) that is more a matter of form than of substance. The velvet glove openly brandishes a knuckleduster: the explicit threat of legislation if the platforms do not co-operate.
“The Commission will continue exchanges and dialogues with online platforms and other relevant stakeholders. It will monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. This work will be completed by May 2018.”
The platforms have been set their task and the big stick will be wielded if they don't fall into line. It is reminiscent of the UK government’s version of co-regulation: 
“Government defines the public policy objectives that need to be secured, but tasks industry to design and operate self-regulatory solutions and stands behind industry ready to take statutory action if necessary.” (e-commerce@its.best.uk, Cabinet Office 1999.)
So long as those harnessed to the task of implementing policy don’t kick over the traces, the uncertain business of persuading a democratically elected legislature to enact a law is avoided.

The Communication displays no enthusiasm for Article 15.  It devotes nearly two pages of close legal analysis to explaining why, in its view, adopting proactive measures would not deprive a platform of hosting protection under Article 14 of the Directive.  Article 15, in contrast, is mentioned in passing but not discussed.  

Such tepidity is the more noteworthy when the balance between fundamental rights reflected in Article 15 finds support in, for instance, the recent European Court of Human Rights judgment in Tamiz v Google ([84] and [85]). Article 15 is not something that can or should be lightly ignored or manoeuvred around.

For all its considerable superstructure - trusted flaggers, certificated standards, reversibility safeguards, transparency and the rest - the Communication lacks solid foundations. It has the air of a castle built on a chain of quicksands: presumed illegality, lack of prior due process at source, reversal of the presumption against prior restraint, assumptions that illegality is capable of precise computation, failure to grapple with differences in Member States' laws, and others.


Whatever may be the appropriate response to illegal content on the internet – and no one should pretend that this is an easy issue – it is hard to avoid the conclusion that the Communication is not it.

The Communication has already come in for criticism (here, here, here, here and here). At risk of repeating points already well made, this post will take an issue by issue dive into its foundational weaknesses.

To aid this analysis I have listed 30 identifiable prescriptive elements of the Communication. They are annexed as the Communication's Action Points (my label). Citations in the post are to that list and to paragraph numbers in the Communication.

Presumed illegal

Underlying much of the Communication’s approach to tackling illegal content is a presumption that, once accused, content is illegal until proven innocent. That can be found in:
  • its suggestion that content should be removed automatically on the say-so of certain trusted flaggers (Action Points 8 and 15, paras 3.2.1 and 4.1);
  • its emphasis on automated detection technologies (Action Point 14, para 3.3.2);
  • its greater reliance on corrective safeguards after removal than preventative safeguards before (paras 4.1, 4.3);
  • the suggestion that platforms’ performance should be judged by removal rates (the higher the better) (see More is better, faster is best below);
  • the suggestion that in difficult cases the platform should seek third party advice instead of giving the benefit of the doubt to the content (Action Point 17, para 4.1);
  • its encouragement of quasi-official databases of ‘known’ illegal content, but without a legally competent determination of illegality (Action Point 29, para 5.2). 

Taken together these add up to a presumption of illegality, implemented by prior restraint.

In one well known case a tweet provoked a criminal prosecution, resulting in a magistrates’ court conviction. Two years later the author was acquitted on appeal. Under the trusted flagger system the police could notify such a tweet to the platform at the outset with every expectation that it would be removed, perhaps automatically (Action Points 8 and 15, paras 3.2.1, 4.1).  A tweet ultimately found to be legal would most probably have been removed from public view, without any judicial order. 

Multiply that thousands of times, factor in that speedy removal will often be the end of the matter if no prosecution takes place, and we have prior permanent restraint on a grand scale.

Against that criticism it could be said that the proposed counter notice arrangements provide the opportunity to reverse errors and doubtful decisions.  However by that time the damage is done.  The default has shifted to presumed illegality, inertia takes over and many authors will simply shrug their shoulders and move on for the sake of a quiet life.  

If the author does object, the material would not automatically be reinstated. The Communication suggests that reinstatement should take place if the counter-notice provides reasonable grounds to consider that removed content is not illegal. The burden has shifted to the author to establish innocence.

If an author takes an autonomous decision to remove something that they have previously posted, that is their prerogative. No question of interference with freedom of speech arises.  But if the suppression results from a state-fostered system that institutionalises removal by default and requires the author to justify its reinstatement, there is interference with freedom of speech of both the author and those who would otherwise have been able to read the post. It is a classic chilling effect.

Presumed illegality does not feature in any set of freedom of speech principles, offline or online.  Quite the opposite. The traditional presumption against prior restraint, forged in the offline world, embodies the principle that accused speech should have the benefit of the doubt. It should be allowed to stay up until proved illegal. Even then, the appropriate remedy may only be damages or criminal sanction, not necessarily removal of the material from public view.  Only exceptionally should speech be withheld from public access pending an independent, fully considered, determination of legality with all due process.

Even in these days of interim privacy injunctions routinely outweighing freedom of expression, presumption against prior restraint remains the underlying principle. The European Court of Human Rights observed in Spycatcher that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.

In Mosley v UK the ECtHR added a gloss that prior restraints may be "more readily justified in cases which demonstrate no pressing need for immediate publication and in which there is no obvious contribution to a debate of general public interest". Nevertheless the starting point remains that the prior restraint requires case by case justification. All the more so for automated prior restraint on an industrial scale with no independent consideration of the merits.

Due process at source

A keystone of the Communication is the proposed system of 'trusted flaggers' who offer particular expertise in notifying the presence of potentially illegal content: “specialised entities with specific expertise in identifying illegal content, and dedicated structures for detecting and identifying such content online” (para 3.2.1)

Trusted flaggers “can be expected to bring their expertise and work with high quality standards, which should result in higher quality notices and faster take-downs” (para 3.2.1).  Platforms would be expected to fast-track notices from trusted flaggers. The Commission proposes to explore with industry the potential of standardised notification procedures.

Trusted flaggers would range from law enforcement bodies to copyright owners. The Communication names the Europol Internet Referral Unit for terrorist content and the INHOPE network of reporting hotlines for child sexual abuse material as trusted flaggers. It also suggests that civil society organisations or semi-public bodies are specialised in the reporting of illegal online racist and xenophobic material.

The emphasis is notably on practical expertise rather than on legal competence to determine illegality. This is especially significant for the proposed role of law enforcement authorities as trusted flaggers. The police detect crime, apply to court for arrest or search warrants, execute them, mostly hand over cases to prosecutors and give evidence in court. They may have practical competence in combating illegal activities, but they do not have legal competence to rule on legality or illegality (see below Legal competence v practical competence).

The system under which the Europol IRU sends takedown notices to platforms is illustrative. Thousands - in the case of the UK's similar Counter Terrorism Internet Referral Unit (CTIRU) hundreds of thousands - of items of content are taken down on the say-so of the police, with safeguards against overreaching dependent on the willingness and resources of the platforms to push back. 

It is impossible to know whether such systems are ‘working’ or not, since there is (and is meant to be) no public visibility and evaluation of what has been removed. 

As at July 2017 the CTIRU had removed some 270,000 items since 2010.  A recent freedom of information request by the Open Rights Group for a list of the kind of “statistical records, impact assessments and evaluations created and kept by the Counter Terrorism Internet Referrals Unit in relation to their operations” was rejected on grounds that it would compromise law enforcement by undermining the operational effectiveness of the CTIRU and have a negative effect on national security.

In the UK PIPCU (the Police Intellectual Property Crime Unit) is the specialist intellectual property enforcement unit of the City of London Police. One of PIPCU’s activities is to write letters to domain registrars asking them to suspend domains used for infringing activities. Its registrar campaign led to this reminder from a US arbitrator of the distinction between the police and the courts:
“To permit a registrar of record to withhold the transfer of a domain based on the suspicion of a law enforcement agency, without the intervention of a judicial body, opens the possibility for abuse by agencies far less reputable than the City of London Police. Presumably, the provision in the Transfer Policy requiring a court order is based on the reasonable assumption that the intervention of a court and judicial decree ensures that the restriction on the transfer of a domain name has some basis of “due process” associated with it.”
A law enforcement body not subject to independent due process (such as applying for a court order) is at risk of overreach, whether through over-enthusiasm for the cause of crime prevention, succumbing to groupthink or some other reason. Due process at source is designed to prevent that. Safeguards at the receiving end do not perform the same role of keeping official agencies in check.

The Communication suggests (Action Point 8, 3.2.1) that in ‘a limited number of cases’ platforms could remove content notified by certain trusted flaggers without verifying legality themselves. 

What might these 'limited cases' be?  Could it apply to both state and private trusted flaggers? Would it apply to any kind of content and any kind of illegality, or only to some? Would it apply only where automated systems are in place? Would it apply only where a court has authoritatively determined that the content is illegal? Would it apply only to repeat violations? The Communication does not tell us. Where it would apply, absence of due process at source takes on greater significance.

Would it perhaps cover the same ground as Action Point 15, under which fully automated deletion should be applied where "circumstances leave little doubt about the illegality of the material", for example (according to the Communication) when notified by law enforcement authorities?

When we join the dots of the various parts of the Communication the impression is of significant expectation that instead of considering the illegality of content notified by law enforcement, platforms may assume that it is illegal and automatically remove it.

The Communication contains little to ensure that trusted flaggers make good decisions. Most of the safeguards are post-notice and post-removal and consist of procedures to be implemented by the platforms. As to specific due process obligations on the notice giver, the Communication is silent.

The contrast between this and the Freedom of Expression Guidelines noted earlier is evident. The Guidelines emphasise prior due process. The Communication emphasises ex post facto remedial safeguards to be put in place by the platforms. Those are expected to compensate for absence of due process on the part of the authority giving notice in the first place.

Legal competence v practical competence

The Communication opens its section on notices from state authorities by referring to courts and competent authorities able to issue binding orders or administrative decisions requiring online platforms to remove or block illegal content. Such bodies, it may reasonably be assumed, would incorporate some element of due process in their decision-making prior to the issue of a legally binding order.

However we have seen that the Communication abandons that limitation, referring to ‘law enforcement and other competent authorities’. A ‘competent authority’ is evidently not limited to bodies embodying due process and legally competent to determine illegality.

It includes bodies such as the police, who are taken to have practical competence through familiarity with the subject matter. Thus in the Europol EU Internet Referral Unit “security experts assess and refer terrorist content to online platforms”.

It is notable that this section in the earlier leaked draft did not survive the final edit:
“In the EU, courts and national competent authorities, including law enforcement authorities, have the competence to establish the illegality of a given activity or information online.” (emphasis added)
Courts are legally competent to establish legality or illegality, but law enforcement bodies are not. 

In the final version the Commission retreats from the overt assertion that law enforcement authorities are competent to establish illegality:
“In the EU, courts and national competent authorities, including law enforcement authorities, are competent to prosecute crimes and impose criminal sanctions under due process relating to the illegality of a given activity or information online.”
However by rolling them up together this passage blurs the distinction between the police, prosecutors and the courts.

If the police are to be regarded as trusted flaggers, one can see why the Communication might want to treat them as competent to establish illegality.  But no amount of tinkering with the wording of the Communication, or adding vague references to due process, can disguise the fact that the police are not the courts.

Even if we take practical competence of trusted flaggers as a given, the Communication does not discuss the standard to which trusted flaggers should evaluate content. Would the threshold be clearly illegal, potentially illegal, arguably illegal, more likely than not to be illegal, or something else? The omission is striking when compared with the carefully crafted proposed standard for reinstatement: "reasonable grounds to consider that the notified activity or information is not illegal".

The elision of legal competence and practical competence is linked to the lack of insistence on prior due process.  In an unclear case a legally competent body such as a court can make a binding determination one way or the other that can be relied upon.  A trusted flagger cannot do so, however expert and standards compliant it may be.  The lower the evaluation threshold, the more the burden is shifted on to the platform to make an assessment which it is not competent legally, and unlikely to be competent practically, to do. Neither the trusted flagger nor the platform is a court or a substitute for a court.

Due process v quality standards

The Commission has an answer to absence of due process at source. It suggests that trusted flaggers could achieve a kind of quasi-official qualification:
“In order to ensure a high quality of notices and faster removal of illegal content, criteria based notably on respect for fundamental rights and of democratic values could be agreed by the industry at EU level.

This can be done through self-regulatory mechanisms or within the EU standardisation framework, under which a particular entity can be considered a trusted flagger, allowing for sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger.” (para 3.2.1)
The Communication suggests criteria such as internal training standards, process standards, quality assurance, and legal safeguards around independence, conflicts of interest, protection of privacy and personal data.  These would have sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger.  The Commission intends to explore, in particular in dialogues with the relevant stakeholders, the potential of agreeing EU-wide criteria for trusted flaggers.

The Communication's references to internal training standards, process standards and quality assurance could have been lifted from the manual for a food processing plant. But what we write online is not susceptible of precise measurement of size, temperature, colour and weight. With the possible exception of illegal images of children (but exactly what is illegal still varies from one country to another) even the clearest rules are fuzzy around the edges. For many speech laws, qualified with exceptions and defences, the lack of precision extends further. Some of the most controversial such as hate speech and terrorist material are inherently vague. Those are among the most highly emphasised targets of the Commission’s scheme.

A removal scheme cannot readily be founded on the assumption that legality of content can always (or even mostly) be determined like grading frozen peas, simply by inspecting the item - whether the inspector be a human being or a computer.

No amount of training – of computers or people - can turn a qualitative evaluation into a precise scientific measurement.  Even for a well-established takedown practice such as copyright, it is unclear how a computer could reliably identify exceptions such as parody or quotation when human beings themselves argue about their scope and application.

The appropriateness of removal based on a mechanical assessment of illegality is also put in question by the recent European Court of Human Rights decision in Tamiz v Google, a defamation case.  The court emphasised that a claimant’s Article 8 rights are engaged only where the material reaches a minimum threshold of seriousness. It may be disproportionate to remove trivial comments, even if they are technically unlawful.  A removal system that asks only “legal or illegal?” may not be posing all the right questions.


Manifest illegality v contextual information

The broader issue raised in the previous section is that knowledge of content (whether by the notice giver or the receiving platform) is not the same as knowledge of illegality, even if the question posed is the simple "legal or illegal?".  

As Eady J. said in Bunt v Tilley in relation to defamation: “In order to be able to characterise something as ‘unlawful’ a person would need to know something of the strength or weakness of available defences”. Caselaw under the ECommerce Directive contains examples of platforms held not to be fixed with knowledge of illegality until a considerable period after the first notification was made. The point applies more widely than defamation.

The Commission is aware that this is an issue:
“In practice, different content types require a different amount of contextual information to determine the legality of a given content item. For instance, while it is easier to determine the illegal nature of child sexual abuse material, the determination of the illegality of defamatory statements generally requires careful analysis of the context in which it was made.” (para 4.1)
However, to identify the problem is not to solve it.  Unsurprisingly, the Communication struggles to find a consistent approach. At one point it advocates adding humans into the loop of automated illegality detection and removal by platforms:
“This human-in-the-loop principle is, in general, an important element of automatic procedures that seek to determine the illegality of a given content, especially in areas where error rates are high or where contextualisation is necessary.” (para 3.3.2)
It suggests improving automatic re-upload filters, citing as examples the existing ‘Database of Hashes’ in respect of terrorist material, child sexual abuse material and copyright. It says that this could also apply to other material flagged by law enforcement authorities. But it also acknowledges their limitations:
“However, their effectiveness depends on further improvements to limit erroneous identification of content and to facilitate context-aware decisions, as well as the necessary reversibility safeguards.” (para 5.2)
In spite of that acknowledgment, for repeated infringement the Communication proposes that “Automatic stay-down procedures should allow for context-related exceptions”. (para 5.2) 
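By way of illustration only, here is a minimal sketch (in Python, with invented names and data) of the logic of a hash-based re-upload filter. The only question such a filter can ask of a new upload is whether it matches an entry in the database; it cannot ask why the material is being posted, by whom, or in what context.

import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of items previously assessed as illegal
# (standing in for a shared resource such as the 'Database of Hashes').
KNOWN_ILLEGAL_HASHES = {
    sha256_hex(b"previously flagged item #1"),
    sha256_hex(b"previously flagged item #2"),
}

def should_block_reupload(uploaded_bytes: bytes) -> bool:
    """Return True if the upload matches previously flagged content.

    Note what this check cannot take into account: who is posting, for what
    purpose (reporting, research, criticism, parody) or in what context.
    Any 'context-related exception' has to be bolted on outside this
    mechanism, typically by human review after the match has been made.
    """
    return sha256_hex(uploaded_bytes) in KNOWN_ILLEGAL_HASHES

Real systems typically use perceptual fingerprints rather than exact cryptographic hashes, so that trivial edits do not defeat the match; but the structural point is the same. The filter matches the item, it does not evaluate the context in which it reappears.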

Placing ‘stay-down’ obligations on platforms is in any event a controversial area, not least because the monitoring required is likely to conflict with Article 15 of the ECommerce Directive.

Inability to deal adequately with contextuality is no surprise, but it raises serious questions of consistency with the right of freedom of expression. The SABAM/Scarlet and SABAM/Netlog cases in the CJEU made clear that filtering which carries a risk of wrongful blocking constitutes an interference with freedom of expression. It is by no means obvious, particularly in view of the fundamental rights concerns raised by prior restraint, that such an interference can be cured by putting "reversibility safeguards" in place.

Illegality on the face of the statute v prosecutorial discretion

Some criminal offences are so broadly drawn that, in the UK at least, prosecutorial discretion becomes a significant factor in mitigating potential overreach of the offence. In some cases prosecutorial guidelines have been published (such as the English Director of Public Prosecutions’ Social Media Prosecution Guidelines). 

Take the case of terrorism.  The UK government argued before the Supreme Court, in a case about a conviction for dissemination of terrorist publications by uploading videos to YouTube (R v Gul [2013] UKSC 64, [30]), that terrorism was deliberately very widely defined in the statute, but prosecutorial discretion was vested in the DPP to mitigate the risk of criminalising activities that should not be prosecuted. The DPP is independent of the police.

The Supreme Court observed that this amounted to saying that the legislature had “in effect delegated to an appointee of the executive, albeit a respected and independent lawyer, the decision whether an activity should be treated as criminal for the purpose of prosecution”.

It observed that that risked undermining the rule of law since the DPP, although accountable to Parliament, did not make open, democratically accountable decisions in the same way as Parliament. Further, “such a device leaves citizens unclear as to whether or not their actions or projected actions are liable to be treated by the prosecution authorities as effectively innocent or criminal - in this case seriously criminal”. It described the definition of terrorism as “concerningly wide” ([38]).

Undesirable as this type of lawmaking may be, it exists. An institutional removal system founded on the assumption that content can be identified in a binary way as ‘legal/not legal’ takes no account of broad definitions of criminality intended to be mitigated by prosecutorial discretion.

The existing formal takedown procedure under the UK’s Terrorism Act 2006, empowering a police constable to give a notice to an internet service provider, could give rise to much the same concern.  The Act requires that the notice should contain a declaration that, in the opinion of the constable giving it, the material is unlawfully terrorism-related.  There is no independent mitigation mechanism of the kind that applies to prosecutions.   

In fact the formal notice-giving procedure under the 2006 Act appears never to have been used, having been supplanted by the voluntary procedure involving the Counter Terrorism Internet Referral Unit (CTIRU) already described. 

Offline v online

The Communication opens its second paragraph with the now familiar declamation that ‘What is illegal offline is also illegal online’. But if that is the desideratum, are the online protections for accused speech comparable with those offline? 

The presumption against prior restraint applies to traditional offline publishers who are indubitably responsible for their own content. Even if one holds the view that platforms ought to be responsible for users’ content as if it were their own, equivalence with offline does not lead to a presumption of illegality and an expectation that notified material will be removed immediately, or even at all.  

Filtering and takedown obligations introduce a level of prior restraint online that does not exist offline. They are exercised against individual online self-publishers and readers (you and me) via a side door: the platform. Sometimes this will occur prior to publication, always prior to an independent judicial determination of illegality.

The trusted flagger system would institutionalise banned lists maintained by police authorities and government agencies. Official lists of banned content may sound like tinfoil hat territory. But as already noted, platforms will be encouraged to operate automatic removal processes, effectively assuming the illegality of content notified by such sources (Action Points 8 and 15).  

The Commission cites the Europol IRU as a model. The Europol IRU appears not even to restrict itself to notifying illegal content.  In reply to an MEP’s question earlier this year the EU Home Affairs Commissioner said:
“The Commission does not have statistics on the proportion of illegal content referred by EU IRU. It should be noted that the baseline for referrals is its mandate, the EU legal framework on terrorist offences as well as the terms and conditions set by the companies. When the EU IRU scans for terrorist material it refers it to the company where it assesses it to have breached their terms and conditions.”
This blurring of the distinction between notification on grounds of illegality and notification for breach of platform terms and conditions is explored further below (Illegality v terms of service). 

A recent Policy Exchange paper, The New Netwar, makes another suggestion: that an expert-curated data feed of jihadist content being shared online (it is unclear whether this would go further than illegal content) should be provided to social media companies. The paper suggests that this would be overseen by the government’s proposed Commission for Countering Extremism, perhaps in liaison with GCHQ.

It may be said that until the internet came along individuals did not have the ability to write for the world, certainly not in the quantity that occurs today; and we need new tools to combat online threats.  

If the point is that the internet is in fact different from offline, then we should carefully examine those differences, consider where they may lead and evaluate the consequences. Simply reciting the mantra that what is illegal offline is illegal online does not suffice when removal mechanisms are proposed that would have been rejected out of hand in the offline world.

Some may look back fondly on a golden age in which no-one could write for the public other than via the moderating influence of an editor’s blue pencil and spike.   The internet has broken that mould. Each one of us can speak to the world. That is profoundly liberating, profoundly radical and, to some no doubt, profoundly disturbing. If the argument is that more speech demands more ways of controlling or suppressing speech, that would be better made overtly than behind the slogan of offline equivalence.

More is better, faster is best

Another theme that underlies the Communication is the focus on removal rates and the suggestion that more and faster removal is better.

But what is a ‘better’ removal rate? A high removal rate could indicate that trusted flaggers were notifying only content that is definitely unlawful.  Or it could mean that platforms were taking notices largely on trust.  Whilst ‘better’ removal rates might reflect better decisions, pressure to achieve higher and speedier removal rates may equally lead to worse decisions. 

There is a related problem with measuring speed of removal. From what point do you measure the time taken to remove? The obvious answer is, from when a notice is received. But although conveniently ascertainable, that is arbitrary.  As discussed above, knowledge of content is not the same as knowledge of illegality.

Not only is a platform not at risk of liability until it has knowledge of illegality, but if the overall objective of the Commission's scheme is removal of illegal content (and only of illegal content) then the platform positively ought not to take down material unless and until it knows for sure that the material is illegal.  While the matter remains in doubt the material should stay up.  

Any meaningful measure of removal speed should look at the period elapsed after knowledge was acquired, in addition to or instead of the period from first notification.  A compiler of statistics should look at the history of each removal (or non-removal) and make an independent evaluation of when (if at all) knowledge of the illegality was acquired – effectively repeating and second-guessing the platform’s own evaluation. That exercise becomes more challenging if platforms are taking notices on trust and removing without performing their own evaluation.
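To make the difference between the two baselines concrete, here is a purely illustrative sketch (Python, with invented timestamps and field names) computing removal time measured from first notification and, separately, from the later point at which knowledge of illegality could be said to have been acquired. The second figure depends on an after-the-event evaluation that the statistics compiler, not the platform, would have to make.

from datetime import datetime

# Invented record of a single takedown, for illustration only.
removal = {
    "notice_received":    datetime(2017, 10, 2, 9, 0),
    # The (later) point at which, on an independent evaluation, the platform
    # could be said to have acquired knowledge of illegality.
    "knowledge_acquired": datetime(2017, 10, 9, 14, 30),
    "content_removed":    datetime(2017, 10, 10, 8, 0),
}

time_from_notice = removal["content_removed"] - removal["notice_received"]
time_from_knowledge = removal["content_removed"] - removal["knowledge_acquired"]

print("Measured from notice:   ", time_from_notice)     # 7 days, 23:00:00
print("Measured from knowledge:", time_from_knowledge)  # 17:30:00

On the first measure the platform looks slow; on the second it looks prompt. A headline ‘time to removal’ statistic based only on the first baseline says little about whether material was taken down expeditiously once illegality was actually apparent.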

This is a problem created by the Commission’s desire to convert a liability shield (the ECommerce Directive) into a removal tool.

Liability shield v removal tool

At the heart of the Communication (and of the whole ‘Notice and Action’ narrative that the Commission has been promoting since 2010) is an attempt to reframe the hosting liability shield of the ECommerce Directive as a positive obligation on the host to take action when it becomes aware of unlawful content or behaviour, usually when it receives notice.

The ECommerce Directive incentivises, but does not obligate, a host to take action.  This is a nuanced but important distinction. If the host does not remove the content expeditiously after becoming aware of illegality then the consequence is that it loses the protection of the shield. But a decision not to remove content after receiving notice does not of itself give rise to any obligation to take down the content.  The host may (or may not) then become liable for the user’s unlawful activity (if indeed it be unlawful). That is a question of the reach of each Member State’s underlying law, which may vary from one kind of liability to another and from one Member State to another.

This structure goes some way towards respecting due process and the presumption against prior restraint (see above). Even if the more realistic scenario is that a host will be over-cautious in its decisions, a host remains free to decide to leave the material up and bear the risk of liability after a trial or the risk of an interim court order requiring it to be removed. It has the opportunity to 'publish and be damned'.

The Communication contends that "at EU level the general legal framework for illegal content removal is the ECommerce Directive".  The Communication finds support for its ‘removal framework’ view in Recital (40) of the Directive:
“this Directive should constitute the appropriate basis for the development of rapid and reliable procedures for removing and disabling access to illegal information"
The Recital goes on to envisage voluntary agreements encouraged by Member States.

However the Directive is not in substance a removal framework.  It is up to the notice-giver to provide sufficient detail to fix the platform with knowledge of the illegality.  A platform has no obligation to enquire further in order to make a fully informed Yes/No decision. It can legitimately decline to take action on the basis of insufficient information.  

The Communication represents a subtle but significant shift from intermediary liability rules as protection from liability to a regime in which platforms are positively expected to act as arbiters, making fully informed decisions on legality and removal (at least when they are not being expected to remove content on trust).

Thus Action Point 17 suggests that platforms could benefit from submitting cases of doubt to a third party to obtain advice. The Communication suggests that this could apply especially where platforms find difficulty in assessing the legality of a particular content item and it concerns a potentially contentious decision. That, one might think, is an archetypal situation in which the platform can (and, according to the objectives of the Communication, should) keep the content up, protected by the fact that illegality is not apparent.

The Commission goes on to say that “Self-regulatory bodies or competent authorities play this role in different Member States” and that “As part of the reinforced co-operation between online platforms and competent authorities, such co-operation is strongly encouraged.” This is difficult to understand, since the only competent authorities referred to in the Communication are trusted flaggers who give notice in the first place. 

The best that can be said about consistency with the ECommerce Directive is that the Commission is taking on the role of encouraging voluntary agreements envisaged for Member States in Recital (40). Nevertheless an interpretation of Recital (40) that encourages removal without prior due process could be problematic from the perspective of fundamental rights and the presumption against prior restraint.

The Communication’s characterisation of the Directive as the general removal framework creates another problem. Member States are free to give greater protection to hosts under their national legislation than under the Directive.  So, for instance, our Defamation Act 2013 provides website operators with complete immunity from defamation liability for content posted by identifiable users.  Whilst a website operator that receives notice and does not remove the content may lose the umbrella protection of the Directive, it will still be protected from liability under English defamation law. There is no expectation that a website operator in that position should take action after receiving notice.

This shield, specific to defamation, was introduced in the 2013 Act because of concerns that hosts were being put in the position of being forced to choose between defending material as though they had written it themselves (which they were ill-placed to do) or taking material down immediately.

In seeking to construct a quasi-voluntary removal framework in which the expectation is that content will be removed on receipt of sufficient notice, the Communication ignores the possibility that under a Member State's own national law the host may not be at risk at all for a particular kind of liability and there is no expectation that it should remove notified content.

A social media website would be entirely justified under the Defamation Act in ignoring a notice regarding an allegedly defamatory post by an identifiable author.  Yet the Communication expects its notice and action procedures to cover defamation. Examples like this put the Communication on a potential collision course with Member States' national laws.

This is a subset of a bigger problem with the Communication, namely its failure to address the issues raised by differences in substantive content laws as between Member States.

National laws v coherent EU strategy

There is deep tension, which the Communication makes no attempt to resolve, between the plan to create a coherent EU-wide voluntary removal process and the significant differences between the substantive content laws of EU Member States.

In many areas (even in harmonised fields such as copyright) EU Member States have substantively different laws about what is and isn't illegal. The Communication seeks to put in place EU-wide removal procedures, but makes no attempt to address which Member State's law is to be applied, in what circumstances, or to what extent. This is a fundamental flaw.

If a platform receives (say) a hate speech notification from an authority in Member State X, by reference to which Member State's law is it supposed to assess the illegality?

If Member State X has especially restrictive laws, then removal may well deprive EU citizens in other Member States of content that is perfectly legal in their countries.

If the answer is to put in place geo-blocking, that is mentioned nowhere in the Communication and would hardly be consistent with the direction of other Commission initiatives.

If the content originated from a service provider in a different Member State from Member State X, a takedown request by the authorities in Member State X could well violate the internal market provisions of the ECommerce Directive.

None of this is addressed in the Communication, beyond the bare acknowledgment that legality of content is governed by individual Member States' laws. 
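The unresolved choice can be put in a few illustrative lines (Python; the legality assessments and function names are invented). On receiving a notice from Member State X the platform must either remove the item EU-wide, in effect applying X's law to users everywhere, or geo-block it only where it is assessed as illegal. The Communication specifies neither course.

# Invented, illustrative assessments - in reality the same item may be
# illegal under one Member State's law and lawful under another's.
legality_by_member_state = {
    "X": "illegal",   # the notifying Member State's stricter law
    "Y": "legal",
    "Z": "legal",
}

def act_on_notice(item_id: str, eu_wide_removal: bool) -> dict:
    if eu_wide_removal:
        # Option 1: apply Member State X's law to the whole EU audience.
        return {"action": "remove_everywhere", "item": item_id}
    # Option 2: geo-block only where the item is assessed as illegal -
    # a mechanism mentioned nowhere in the Communication.
    blocked = [ms for ms, status in legality_by_member_state.items()
               if status == "illegal"]
    return {"action": "geo_block", "item": item_id, "member_states": blocked}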

This issue is all the more pertinent since the CJEU has specifically referred to it in the context of filtering obligations. In SABAM/Netlog (a copyright case) the Court said:
"Moreover, that injunction could potentially undermine freedom of information, since that system might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications. 
Indeed, it is not contested that the reply to the question whether a transmission is lawful also depends on the application of statutory exceptions to copyright which vary from one Member State to another.
In addition, in some Member States certain works fall within the public domain or may be posted online free of charge by the authors concerned."
A scheme intended to operate in a coherent way across the European Union cannot reasonably ignore the impact of differences in Member State laws. Nor is it apparent how post-removal procedures could assist in resolving this issue.

The question of differences between Member States' laws also arises in the context of reporting criminal offences. 

The Communication states that platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences (Action Point 18).

According to the Communication this could range from abuse of services by organised criminal or terrorist groups (citing Europol’s SIRIUS counter terrorism portal) through to offers and sales of products and commercial practices that are not compliant with EU legislation.

This requirement creates more issues with differences between EU Member State laws. Some activities (defamation is an example) are civil in some countries and criminal in others. Putting aside the question whether the substantive scope of defamation law is the same across the EU, is a platform expected to report defamatory content to the police in a Member State in which defamation is criminal and where the post can be read, when in another Member State (perhaps the home Member State of the author) it would attract at most civil liability? The Communication is silent on the question.

Illegality v terms of service

The Commission says (Action Point 21) that platform transparency should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service. [4.2.1]

The merits of transparency aside, it is unclear why the Communication has ventured into the separate territory of platform content policies that do not relate to illegal content. The Communication states earlier that although there are public interest concerns around content which is not necessarily illegal but potentially harmful, such as fake news or content harmful for minors, the focus of the Communication is on the detection and removal of illegal content. 

There has to be a suspicion that platforms’ terms of service would end up taking the place of illegality as the criterion for takedown, thus avoiding the issues of different laws in different Member States and, given the potential breadth of terms of service, almost certainly resulting in over-removal compared with removal by reference to illegality. The apparent practice of the EU IRU in relying on terms of service as well as illegality has already been noted.

This effect would be magnified if pressure were brought to bear on platforms to make their terms of service more restrictive. We can see potential for this in the Policy Exchange publication The New Netwar:
“Step 1: Ask the companies to revise and implement more stringent Codes of Conduct/Terms of Service that explicitly reject extremism.

At present, the different tech companies require users to abide by ‘codes of conduct’ of varying levels of stringency. … This is a useful start-point, but it is clear that they need to go further now in extending the definition of what constitutes unacceptable content. … 
… it is clear that there need to be revised, more robust terms of service, which set an industry-wide, robust set of benchmarks. The companies must be pressed to act as a corporate body to recognise their ‘responsibility’ to prevent extremism as an integral feature of a new code of conduct. … The major companies must then be proactive in implementing the new terms of trade. In so doing, they could help effect a sea-change in behaviour, and help to define industry best practice.”

In this scheme of things platforms’ codes of conduct and terms of service become tools of government policy rather than a reflection of each platform’s own culture or protection for the platform in the event of a decision to remove a user’s content.



Annex - The Communication’s 30 Action Points


State authorities

1.       Online platforms should be able to make swift decisions as regards possible actions with respect to illegal content online without being required to do so on the basis of a court order or administrative decision, especially where a law enforcement authority identifies and informs them of allegedly illegal content. [3.1]

2.      At the same time, online platforms should put in place adequate safeguards when giving effect to their responsibilities in this regard, in order to guarantee users’ right of effective remedy. [3.1]

3.      Online platforms should therefore have the necessary resources to understand the legal frameworks in which they operate.  [3.1]

4.      They should ensure that they can be rapidly and effectively contacted for requests to remove illegal content expeditiously and also in order to, where appropriate, alert law enforcement to signs of online criminal activity. [3.1]

5.      Law enforcement and other competent authorities should co-operate with one another to define effective digital interfaces for fast and reliable submission of notifications and to ensure efficient identification and reporting of illegal content. [3.1]

Notices from trusted flaggers

6.      Online platforms are encouraged to make use of existing networks of trusted flaggers. [3.2.1]

7.       Criteria for an entity to be considered a trusted flagger based on fundamental rights and democratic values could be agreed by the industry at EU level, through self-regulatory mechanisms or within the EU standardisation framework. [3.2.1]

8.      In a limited number of cases platforms may remove notified content without further verifying the legality of the content themselves. For these cases trusted flaggers could be subject to audit and a certification scheme.  [3.2.1] 

9.      Where there are abuses of trusted flagger mechanisms against established standards, the privilege of a trusted flagger status should be removed. [3.2.1]

Notices by ordinary users

10.   Online platforms should establish an easily accessible and user-friendly mechanism allowing users to notify hosted content considered to be illegal. [3.2.2]

Quality of notices

11.     Online platforms should put in place effective mechanisms to facilitate the submission of notices that are sufficiently precise and adequately substantiated. [3.2.3]

12.   Users should not normally be obliged to identify themselves when giving a notice unless it is required to determine the legality of the content. They should be encouraged to use trusted flaggers, where they exist, if they wish to maintain anonymity.   Notice providers should have the opportunity voluntarily to submit contact details. [3.2.3]

Proactive measures by platforms

13.   Online platforms should adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices. [3.3.1]

14.   The Commission strongly encourages online platforms to use voluntary, proactive measures aimed at the detection and removal of illegal content and to step up cooperation and investment in, and use of, automatic detection technologies. [3.3.2]

Removing illegal content

15.   Fully automated deletion or suspension of content should be applied where the circumstances leave little doubt about the illegality of the material. [4.1]

16.   As a general rule, notices from trusted flaggers should be addressed more quickly. [4.1]

17.   Platforms could benefit from submitting cases of doubt to a third party to obtain advice. [4.1]

18.   Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. [4.1]

Transparency

19.   Online platforms should disclose their detailed content policies in their terms of service and clearly communicate this to their users. [4.2.1]

20.  The terms should not only define the policy for removing or disabling content, but also spell out the safeguards that ensure that content-related measures do not lead to over-removal, such as contesting removal decisions, including those triggered by trusted flaggers. [4.2.1]

21.   This should reflect both the treatment of illegal content and content which does not respect the platform’s terms of service. [4.2.1]

22.  Platforms should publish at least annual transparency reports with information on the number and type of notices received and actions taken; time taken for processing and source of notification; counternotices and responses to them. [4.2.2]

Safeguards against over-removal

23.  In general those who provided the content should be given the opportunity to contest the decision via a counternotice, including when content removal has been automated. [4.3.1]

24.  If the counter-notice has provided reasonable grounds to consider that the notified activity or information is not illegal, the platform should restore the content that was removed without undue delay or allow re-upload by the user. [4.3.1]

25.  However in some circumstances allowing for a counternotice would not be appropriate, in particular where this would interfere with investigative powers of authorities necessary for prevention, detection and prosecution of criminal offences. [4.3.1]

Bad faith notices and counter-notices

26.  Abuse of notice and action procedures should be strongly discouraged.
a.       For instance by de-prioritising notices from a provider who sends a high rate of invalid notices or receives a high rate of counter-notices. [4.3.2]
b.      Or by revoking trusted flagger status according to well-established and transparent criteria. [4.3.2]

Measures against repeat infringers

27.  Online platforms should take measures which dissuade users from repeatedly uploading illegal content of the same nature. [5.1]

28.  They should aim to effectively disrupt the dissemination of such illegal content. Such measures would include account suspension or termination. [5.1]

29.  Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal. [5.2]

30.  Online platforms should continuously update their tools to ensure all illegal content is captured. Technological development should be carried out in co-operation with online platforms, competent authorities and other stakeholders including civil society. [5.2]