Shortly after the Online Safety Act (OSA) gained Royal Assent in October 2023, Ofcom issued a 1,728-page consultation on Illegal Harms. This was the first step in Ofcom's lengthy journey towards implementing and giving concrete substance to the various duties that the Act will place on user-to-user (U2U) service providers and search engines.
The output of the Illegal Harms process (one of several consultations that Ofcom has to undertake) will be Codes of Practice, accompanied by Guidance documents on specific topics mandated by the Act, plus a Register of Risks. Ofcom anticipates the final versions of the Codes coming into force around the end of 2024 or the beginning of 2025.
The weight of the Illegal Harms consultation confounded even those of us who have argued that the OSA’s design is misconceived and would inevitably result in a legal and regulatory quagmire. Then, just a week before the Ofcom consultation closed in February 2024, the Information Commissioner's Office added its own contribution: 47 pages of guidance on how data protection law applies to online content moderation processes, including moderation carried out to comply with duties under the OSA. In June 2024 the ICO invited feedback on that guidance.
The significance of Ofcom’s Codes of Practice is that a service provider is deemed to comply with the Act’s safety duties if it implements the recommendations of a Code of Practice. Ofcom’s Guidance documents are intended to assist service providers in implementing the Codes.
The consultation documents are not for the faint-hearted. The draft Guidance on Illegal Content Judgements, for instance, runs to 390 pages (perhaps unsurprisingly when, as Ofcom notes, the Act lists over 130 priority offences to which the safety duties apply). Parliamentarians may have assumed that determining whether user content is illegal is a simple matter for a service provider. The Guidance, unsurprisingly, reveals otherwise.
Some will think that the Ofcom consultation over-emphasises content moderation at the expense of ‘safety by design’ measures, which would not necessarily depend on distinguishing between legal and illegal user content.
Indeed, Ofcom itself has previously downplayed the content moderation aspects of the regime. In March last year the head of Ofcom, Melanie Dawes, told POLITICO that the then Online Safety Bill was:
"not
really a regime about content. It's about systems and processes. It's about the
design of the (social media) service that does include things like the
recommender algorithms and how they work"
When the Bill gained Royal Assent in October 2023, she told the BBC that:
"Ofcom is
not a censor, and our new powers are not about taking content down. Our job is
to tackle the root causes of harm."
A few weeks later the Illegal Harms consultation stated that the "main" duties relating to illegal content are for services:
"to assess the risk of harm
arising from illegal content ... or activity on their service, and take
proportionate steps to manage and mitigate those risks." (Volume 1,
Introduction.)
The introductory Section 1 of the Act, added in the Bill’s later stages, similarly emphasises risk of harm:
"[T]his
Act (among other things) ... imposes duties which, in broad terms, require providers
of services regulated by this Act to identify, mitigate and manage the risks of
harm (including risks which particularly affect individuals with a certain characteristic)
from illegal content and activity..."
That introductory section, however, is not – and does not purport to be – a complete description of the safety duties. The Act also imposes content-based duties: duties that are expressly framed in terms of, for instance, removing illegal content. Those sit alongside the duties that are expressed in other ways, such as managing and mitigating harm (defined in the Act as “physical or psychological harm”).
The Bill's Impact Assessment estimated that the largest proportion of compliance costs (£1.9 billion over 10 years) would be incurred in increased content moderation. It is not a surprise that content moderation features strongly in the consultation.
The overall impression given by the consultation is that Ofcom is conscious of the challenges presented by requiring service providers to adjudge the illegality of user content as a preliminary to blocking or removal, all the more so if they must also act proactively to detect it. Ofcom has made few recommendations for automated proactive detection and blocking of user content. However, it may be about to dip its toes further into those perilous waters:
“…we are planning an additional consultation later this year on how automated tools, including AI, can be used to proactively detect illegal content and content most harmful to children – including previously undetected child sexual abuse material.” (A window into young children’s online worlds, 19 April 2024)
Ofcom also appears to have sought, within the constraints of the Act, to minimise the compliance burden on small and medium businesses, individual service providers and non-commercial entities. Politicians may have convinced themselves that the legislation is all about big US social media companies and their algorithms. Ofcom has drawn the short straw of implementing the Act as it actually is: covering (according to the Impact Assessment) an estimated 25,000 UK businesses alone, 80% of which are micro-businesses (fewer than 10 employees).
That much by way of general introduction. After several attempts ended up bogged down in the quagmire, I have finally composed this short series of reflections on selected aspects of the Illegal Harms consultation and what it reveals about the challenges of giving concrete form to the Act.
Part 2 of the series takes a more detailed look at how the harm-based aspects of the service provider duties have played out in the consultation.
Part 3 categorises the U2U service provider duties (content-based, non-content-based and harm-based) and analyses the ‘systems and processes’ versus ‘content’ dichotomy.
Part 4 looks at Proactive Illegality Duties, Safeguards and Proportionality.
Part 5 is about fitting the Illegal Harms Consultation to the Act, taking a look at some perplexing definitions.
Part 6 discusses the interaction between the Act’s illegality duties and data protection.
Not part of the series, but this is my analysis of how the Act's illegality duties might be applied to an online post of Marieha Hussain's 'coconuts' protest placard.