One of the more intriguing aspects of the draft Online Safety Bill is the government’s insistence that its safety duties are not about individual items of content, but about having appropriate systems and processes in place; and that this is protective of freedom of expression.
Thus, in written evidence to the Joint Parliamentary Committee scrutinising the draft Bill, the DCMS said:
“The regulatory framework set out in the draft Bill is
entirely centred on systems and processes, rather than individual pieces of
content, putting these at the heart of companies' responsibilities.
…
The focus on robust processes and systems rather than
individual pieces of content has a number of key advantages. The scale of
online content and the pace at which new user-generated content is uploaded
means that a focus on content would be likely to place a disproportionate
burden on companies, and lead to a greater risk of over-removal as companies seek
to comply with their duties. This could put freedom of expression at risk, as
companies would be incentivised to remove marginal content. The focus on
processes and systems protects freedom of expression, and additionally means
that the Bill’s framework will remain effective as new harms emerge.
The regulator will be focused on oversight of the
effectiveness of companies’ systems and processes, including their content
moderation processes. The regulator will not make decisions on individual
pieces of content, and will not penalise companies where their moderation
processes are generally good, but inevitably not perfect.”
The government appears to be arguing that since a service
provider would not automatically be sanctioned for a single erroneous removal
decision, it would tend to err on the side of leaving marginal content up. Why such an incentive would operate
only in the direction of under-removal, when the same logic would apply to
individual decisions in either direction, is unclear.
Be that as it may, elsewhere the draft Bill hardwires a
bias towards over-removal into the illegal content safety duty: by setting the
threshold at which the duty bites at ‘reasonable grounds to believe’ that the
content is illegal, rather than actual illegality or even likelihood of
illegality.
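The point can be put crudely in code rather than statutory language. The sketch below is illustrative only: the draft Bill contains no numeric thresholds, and the figures are invented purely to show why a duty that bites at ‘reasonable grounds to believe’ sweeps in more marginal content than one that bites only at actual illegality.

```python
# Hypothetical illustration only: the draft Bill sets no numeric thresholds.
# The figures are invented to show how lowering the trigger from actual
# illegality to "reasonable grounds to believe" widens the net of removal.

def remove_if_actually_illegal(assessed_likelihood: float) -> bool:
    # Duty bites only where the content is judged actually illegal.
    return assessed_likelihood >= 0.95  # invented bar

def remove_if_reasonable_grounds(assessed_likelihood: float) -> bool:
    # Duty bites on mere reasonable grounds to believe the content is illegal.
    return assessed_likelihood >= 0.5   # invented, lower bar

marginal_item = 0.6  # a borderline assessment of likely illegality

print(remove_if_actually_illegal(marginal_item))    # False: stays up
print(remove_if_reasonable_grounds(marginal_item))  # True: comes down
```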
The government’s broader claim is that centring duties on
systems and processes results in a regulatory regime that is not focused on individual
pieces of content at all. This claim merits close scrutiny.
Safety duties, in terms of the steps required to fulfil them, can be of three kinds:
- Non-content. Duties with no direct effect on content at all, such as a duty to provide users with a reporting mechanism.
- Content-agnostic. Duties that are independent of the kind of content involved but nevertheless affect users’ content. By its nature a duty that is unrelated to (say) the illegality or harmfulness of content will tend to result in steps being taken (‘friction’ devices, for instance, or limits on reach) that affect unobjectionable or positively beneficial content just as they affect illegal or legal but harmful content.
- Content-related. These duties are framed specifically by reference to certain kinds of content: in the draft Bill, illegal, harmful to children and harmful to adults. Duties of this kind aim to affect those kinds of content in various ways, but carry a risk of collateral damage to other content.
In principle a content-related duty could encompass harm
caused either by the informational content itself, or by the manner in which a
message is conveyed. Messages with no informational content at all can cause
harm: repeated silent telephone calls can instil fear or, at least, constitute
a nuisance; flashing lights can provoke an epileptic seizure.
The government’s emphasis on systems and processes to some
extent echoes calls for a ‘systemic’ duty of care. To quote the Carnegie UK
Trust’s evidence to the Joint Scrutiny Committee, arguing for a more systemic
approach:
“To achieve the benefits of a systems and processes driven approach the Government should revert to an overarching general duty of care where risk assessment focuses on the hazards caused by the operation of the platform rather than on types of content as a proxy for harm.”
A systemic duty would certainly include the first two categories of duty: non-content and content-agnostic. It seems inevitable that a systemic duty would also encompass content-related duties. While steps taken pursuant to a duty may range more broadly than a binary yes/no content removal decision, that does not detract from the need to decide what (if any) steps to take according to the kind of content involved.
Indeed it is notable how rapidly discussion of a systemic
duty of care tends to move on to categories of harmful content, such as hate
speech and harassment. Carnegie’s evidence, while criticising the draft Bill’s
duties for focusing too much on categories of content, simultaneously censures
it for not spelling out for the ‘content harmful to adults’ duty how “huge
volumes of misogyny, racism, antisemitism etc – that are not criminal but are
oppressive and harmful – will be addressed”.
Even a wholly systemic duty of care has, at some level and
at some point – unless everything done pursuant to the duty is to apply
indiscriminately to all kinds of content – to become focused on which kinds of
user content are and are not considered to be harmful by reason of their
informational content, and to what degree.
To take one example, Carnegie discusses repeat
delivery of self-harm content due to personalisation systems. If repeat
delivery per se constitutes the risky activity, then inhibition of that
activity should be applied in the same way to all kinds of content. If repeat
delivery is to be inhibited only, or differently, for particular kinds of
content, then the duty additionally becomes focused on categories of content. There is no
escape from this dichotomy.
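The dichotomy can be put in concrete, if simplified, terms. In the sketch below (illustrative only; the cap and the category list are invented, and nothing here is drawn from the draft Bill) the first approach treats repeat delivery itself as the risk, while the second cannot operate without first classifying the content.

```python
# Hypothetical sketch of the dichotomy: an inhibition on repeat delivery is
# either applied uniformly (content-agnostic) or conditioned on classifying
# each item of content (content-related).

def inhibit_repeat_delivery_agnostic(times_already_shown: int) -> bool:
    # Repeat delivery itself is treated as the risky activity, so the same
    # cap applies to every kind of content alike.
    return times_already_shown >= 3  # invented cap

HARMFUL_CATEGORIES = {"self-harm"}  # invented category list

def inhibit_repeat_delivery_by_category(category: str, times_already_shown: int) -> bool:
    # The cap applies only to designated categories of content, which first
    # requires a judgment about what category each item falls into.
    return category in HARMFUL_CATEGORIES and times_already_shown >= 1
```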
It is possible to conceive of a systemic safety duty expressed
in such general terms that it would sweep up anything in the system that might
be considered capable of causing harm (albeit – unless limited to risk of physical injury – it would still inevitably struggle, as does the draft Bill, with
the subjective nature of harms said to be caused by informational content). A
systemic duty would relate to systems and processes that for whatever reason
are to be treated as intrinsically risky.
The question that then arises is what activities are to be
regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems
may create risks of various kinds. It is quite another to suggest that that is true of any kind
of U2U platform, even a simple discussion forum. If the underlying assumption of
a systemic duty of care is that providing a facility in which individuals can
speak to the world is an inherently risky activity, that (it might be thought)
upends the presumption in favour of speech embodied in the fundamental right of
freedom of expression.
The draft Bill – content-related or not?
To what extent are the draft Bill’s duties content-related,
and to what extent systemic?
Most of the draft Bill’s duties are explicitly content-related. They are aimed at online user content that is illegal or harmful to adults or children. To the extent that, for instance, the effect of algorithms on the likelihood of encountering content has to be considered, that is in relation to those kinds of content.
For content-related duties the draft Bill draws no obvious distinction
between informational and non-informational causes of harm. So risk of physical
injury as a result of reading anti-vax content is treated indistinguishably
from risk of an epileptic seizure as a result of seeing flashing images.
The most likely candidates in the draft Bill for content-agnostic or non-content duties are Sections 9(2) and 10(2)(a). For illegal content, S.9(2) requires the service provider to “take proportionate steps to mitigate and effectively manage the risks of harm to individuals”, identified in the service provider’s most recent S.7(8) illegal content risk assessment. S.10(2)(a) contains a similar duty in relation to harm to children in different age groups, based on the most recent S.7(9) children’s risk assessment.
Although the S.7 risk assessments are about illegal content
and content harmful to children, neither the S.9(2) nor the S.10(2)(a) safety duty
is expressly limited to harm arising from those kinds of content.
Possibly, those duties are intended to relate back to
Sections 7(8)(e) and 7(9)(e) respectively. Those require risk assessments of the
“different ways in which the service is used, and the impact that has on the
level of risk of harm that might be suffered” by individuals or children
respectively – again without expressly referring to the kinds of content that constitute
the subject-matter of Sections 7(8) and 7(9).
However, to deduce a pair of wholly content-agnostic duties in Sections
9(2) and 10(2)(a) would seem to require those S.7 risk assessment factors to be considered independently of their respective contexts.
Whatever may be the scope of S.9(2) and 10(2)(a), the vast majority
of the draft Bill’s safety duties are drafted expressly by reference to
in-scope illegal or legal but harmful content. Thus, for example, the
government notes at para [34] of its evidence:
“User-to-user services will be required to operate their
services using proportionate systems and processes to minimise the presence,
duration and spread of illegal content and to remove it swiftly once
they are aware of it.” (emphasis added)
As would be expected, those required systems and processes
are framed by reference to a particular type of user content. The same is true
for duties that apply to legal content defined as harmful to adults or
children.
The Impact Assessment accompanying the draft Bill states:
“…it is expected that undertaking additional content
moderation (through hiring additional content moderators or using automated
moderation) will represent the largest compliance cost faced by in-scope
businesses.” (Impact Assessment [166])
That compliance cost is estimated at £1.7 billion over 10
years. That does not suggest a regime that is not focused on content.
Individual user content
The contrast drawn by the government is between systems and
processes on the one hand, and “individual” pieces of content on the other.
The draft Bill defines harm as physical or psychological harm. What could result in such harm? The answer, online, can only be individual user content: that which, whether alone or in combination, singly or repeated, we say and see online. Various factors may influence, to differing extents, which user content is seen by whom: user choices such as joining discussion forums and channels, choosing topics, following other users, rating each other’s posts and so on; or platform-operated recommendation and promotion feeds. But none of that detracts from the fact that it is what is posted – items of user content – that results in any impact.
The decisions that service providers would have to make –
whether automated, manual or a combination of both – when attempting to
implement content-related safety duties, inevitably concern individual items of
user content. The fact that those decisions may be taken at scale, or are the
result of implementing systems and processes, does not change that.
For every item of user content putatively subject to a
filtering, take-down or other kind of decision, the question for a service
provider seeking to discharge its safety duties is always what (if anything)
should be done with this item of content in this context? That is
true regardless of whether those decisions are taken for one item of content, a
thousand, or a million; and regardless of whether, when considering a service
provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy
of its systems and processes rather than on punishing service providers for
individual content decision failures.
A platform duty of care has been likened to an obligation to
prevent risk of injury from a protruding nail in a floorboard. The analogy is flawed, but even taking it at face value, the draft Bill casts service providers in the role of hammer, not
nail. The dangerous nail is users’ speech. Service providers are the tool
chosen to hammer it into place. Ofcom directs the use of the tool. Whether an
individual strike of the hammer may or may not attract regulatory sanction is a
matter of little consequence to the nail.
Even if Ofcom would not be involved in making individual
content decisions, it is difficult to see how it could avoid at some point
evaluating individual items of content. Thus the provisions for use of technology
notices require the “prevalence” of CSEA and/or terrorism content to be
assessed before serving a notice. That inevitably requires Ofcom to assess whether
material present on the service does or does not fall within those defined
categories of illegality.
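Put crudely (the sketch below is illustrative only and does not reflect any definition in the draft Bill), any measure of prevalence is built out of per-item judgments:

```python
# Hypothetical sketch, not the draft Bill's machinery: whatever "prevalence"
# means in detail, a prevalence figure is an aggregate of per-item judgments
# about whether content falls within the defined categories of illegality.

def prevalence(items, falls_within_category) -> float:
    # falls_within_category is a per-item judgment: does this item of user
    # content amount to, e.g., CSEA or terrorism content?
    if not items:
        return 0.0
    flagged = sum(1 for item in items if falls_within_category(item))
    return flagged / len(items)

# The aggregate cannot be computed without assessing individual items of content.
```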
More broadly, it is difficult to see how Ofcom could
evaluate for compliance purposes the proportionality and effectiveness of
filtering, monitoring, takedown and other systems and processes without
considering whether the user content affected does or does not qualify as
illegal or harmful content. That would again require a concrete assessment of
at least some actual items of user content.