Friday 6 January 2023

Twenty questions about the Online Safety Bill

Before Christmas, Culture Secretary Michelle Donelan invited members of the public to submit questions about the Online Safety Bill, which she will sit down to answer in the New Year.

Here are mine. 

1. A volunteer who sets up and operates a Mastodon instance in their spare time appears to be the provider of a user-to-user service. Is that correct?

2. Alice runs a personal blog on a blogging platform and can decide which third-party comments on her blogposts to accept or reject. Is Alice (subject to any Schedule 1 exemptions) the provider of a user-to-user service in relation to those third-party comments?

3. Bob runs a blog on a blogging platform. He has multiple contributors, whom he selects. Is Bob the provider of a user-to-user service in relation to their contributions?

4. Is a collaborative software development platform the provider of a user-to-user service?

5. The exclusion from “regulated user-generated content” extends to comments on comments (Clause 49(6)). But a facility enabling free-form ‘comments on comments’ appears to disapply the Sch 1 para 4 limited functionality user-to-user service exemption. Is that correct? If so, what is the rationale for the difference? Would, for example, a newspaper website with functionality enabling free-form ‘comments on comments’ therefore lose its exclusion from scope under Sch 1 para 4?

6. Does the Sch 1 para 4 limited functionality exemption apply to goods retailers’ own-product review sections? If so, does it achieve that, given that it refers only to content and not to the goods themselves?

7. Would a site that enables academics to upload papers, subject to prior review by the site operator, be a user-to-user service? 

8. Clause 204(2)(e) appears to suggest that a multiplayer online game would be a user-to-user service by virtue of player interaction alone, whether or not there is an inter-player chat or similar facility. Is that right?

9. Carol sets up and operates a voluntary online neighbourhood watch forum for her locality. Would Carol be a provider of a user-to-user service? 

10. Dan operates a blockchain node. Would Dan be a provider of a user-to-user service?

11. Grace chairs a public meeting using a video platform. Grace has control over who can join the meeting. Would Grace be a provider of a user-to-user service in relation to that meeting?

12. The threshold that the Bill requires a platform to apply when determining criminal illegality is ‘reasonable grounds to infer’. The criminal standard of proof is ‘beyond reasonable doubt’. Would not the Bill’s lower threshold inevitably require removal (at least for proactive obligations) of content that is in fact legal? For automated real time systems would that not occur at scale?

13. The Bill requires a platform to adjudge illegality on the basis of all relevant information reasonably available to it. Particularly for proactive automated processes, that will be limited to what users have posted to the platform. Yet illegality often depends crucially on extrinsic contextual information that is not available to the platform. How, then, could the adjudgment required by the Bill not be arbitrary?

14. For many offences the question of illegality is likely to revolve mainly around intent and available defences. The Bill requires platforms to assess illegality on the basis that the possibility of a defence is to be taken into account only if the platform has reasonable grounds to infer that a defence may successfully be relied upon. Yet the information from which the possibility of a defence (such as reasonable excuse) might be inferred will very often be extrinsic context that, especially for proactive obligations, is not available to a platform. Would that not inevitably require removal of content that is in fact legal? For automated real time systems would that not occur at scale?

15. The Bill requires platforms to have particular regard to the importance of protecting users’ right to freedom of expression ‘within the law’. Does that modify the express requirements of Clause 170 as to how a platform should assess illegality? If so, how?

16. The government’s European Convention on Human Rights Memorandum contains no discussion of the Bill’s illegality duties as a form of prior restraint. Nor does it address the human rights implications of the ‘reasonable grounds to infer’ clause, which was introduced later. Will the government issue a revised Memorandum?

17. Is it intended that the risks of harm to individuals to be mitigated and managed under Clause 9(2)(c) should be limited to those arising from illegality identified in the illegality risk assessment? If so, how does the Bill achieve that?

18. The Bill contains powers to require private messaging services to use accredited technology to identify child sexual exploitation and abuse (CSEA) content. It also contains an obligation to report all newly detected material to the National Crime Agency. The Explanatory Notes state that services will be required to report all and any available information relating to instances of CSEA, including any that helps identify a perpetrator or victim.

The White Paper noted that “Many children and young people take and share sexual images. Creating, possessing, copying or distributing sexual or indecent images of children and young people under the age of 18 is illegal, including those taken and shared by the subject of the image.” Does this mean that an under-18 who consensually takes and shares an indecent selfie on a private messaging platform would automatically be reported to the National Crime Agency if the platform detects the image?

19. What are the estimated familiarisation and compliance costs for an in-scope small business or voluntary user-to-user service? How are those estimated costs calculated?

20. The Law Commission stated in 2018 that the common law public nuisance offence applied to online communications. Its statutory replacement in s.78 of the Police, Crime, Sentencing and Courts Act 2022 does so too. Could a platform’s reactive duty under Clause 9, combined with Clause 170, require it to determine whether it has reasonable grounds to infer that a user’s post creates a risk of causing serious annoyance to a section of the public?


