This is the sixth and final instalment in a series of reflections on Ofcom’s Illegal Harms consultation under the Online Safety Act 2023. Ofcom is due to publish the final version of its Illegal Harms Codes of Practice and Guidance in December.
The interaction between data protection law and the Online
Safety Act’s illegal content duties attracted almost no attention during the
passage of the Bill through Parliament. Nor does data protection garner more
than a bare mention in the body of the Act itself. Nevertheless, service
providers will have to perform their obligations compatibly with data
protection laws.
However, data protection law does not sit entirely neatly
alongside the OSA. It overlaps and potentially collides with some of the
substantive measures that the Act requires service providers to take. This
creates tensions between the two regimes.
As the process of implementing the OSA’s service
provider duties has got under way, more attention has been directed to how the
two regimes fit together.
At the most general level, while the Bill was still under
discussion, on 25 November 2022 the ICO and Ofcom published a Joint Statement on online safety and data protection. This was an aspirational document,
setting out shared goals of maximising coherence between the data protection
and online safety regimes and working together to promote compliance with them.
It envisaged a renewed formal memorandum of understanding between the ICO and
Ofcom (yet to appear). A more detailed
Joint Statement on collaboration between the two regulators was issued on 1 May
2024.
The November 2022 statement recognised that:
“there are sometimes tensions
between safety and privacy. For example, to protect users’ safety online,
services might need to collect more information about their users, the content
they view and their behaviour online. To protect users’ privacy, services can
and should limit this data collection to what is proportionate and necessary.”
It went on:
“Where there are tensions between
privacy and safety objectives, we will provide clarity on how compliance can be
achieved with both regimes.”
On 16 February 2024, a week before the end of Ofcom’s
Illegal Harms consultation period, the Information Commissioner’s Office
published 47 pages of guidance on how data protection law applies to online
content moderation processes - including moderation carried out to comply with
duties under the Online Safety Act. It avowed an aim to support organisations
carrying out content moderation in scope of the Act. Four months later, the ICO
invited feedback on its Guidance.
Ofcom’s Illegal Harms consultation is itself liberally
garnished with warnings that data protection law must be complied with, but
less generously endowed with concrete guidance on exactly how to do so.
The ICO Guidance, although it put some flesh on the bones,
was still pitched at a relatively high level. That was a deliberate decision.
The accompanying Impact Assessment records that the ICO considered, but
rejected, the option of:
“more extensive guidance
discussing in depth how data protection law applies when developing or using
content moderation”
in favour of:
“High level guidance setting out
the ICO’s preliminary data protection and privacy expectations for online
content moderation, and providing practical examples, with plans for further
work as the policy area develops”.
The reason for this decision was that:
“it provides some degree of
clarity for a wide variety of stakeholders, whilst still allowing the necessary
flexibility for our policy positions to develop during the early stages of
Ofcom’s policy and guidance development”.
The next, and perhaps most interesting, document was the
ICO’s own submission to Ofcom’s Illegal Harms consultation, published on 1
March 2024. In this document the tensions between the OSA and data protection
are most evident. In some areas the ICO overtly took issue with Ofcom’s
approach.
What are some of the potential areas of tension?
Illegality risk assessment
The Ofcom consultation suggests that for the illegality risk
assessment required under S.9 OSA service providers should, among other things,
consider the following ‘core input’ to the risk assessment:
“assess any other evidence they
hold that is relevant to harms on their service. This could include any
existing harms reporting, research held by the service, referrals to law
enforcement, data on or analysis of user behaviour relating to harms or product
testing. Any types of evidence listed under Ofcom’s enhanced inputs (e.g. the
results of content moderation, product testing, commissioned research) that the
business already collects and which are relevant to the risk assessment, should
inform the assessment. In effect, if the service already holds these inputs,
they should be considered as core inputs.” [Table 9.4]
Ofcom adds that:
“… any use of users’ personal
data (including any data that is not anonymised), will require services to
comply with their obligations under UK data protection law. Services will need
to make judgments on the data they hold to ensure it is processed lawfully,
including providing appropriate transparency to users when the data is
collected or further processed.” [Table 9.4]
The topic of core inputs caught the eye of the ICO, which
observed:
“A key data protection
consideration when processing personal data for risk assessment is the data
minimisation principle set out in Article 5(1)(c) of the UK GDPR. This requires
the personal data that services process to be adequate, relevant and limited to
what is necessary in relation to the purposes for which it is processed. This
means that services should identify the minimum amount of personal data they
need to fulfil their purpose.” [p.7 – p.8]
Illegality judgements and data minimisation
Data minimisation is, more generally, an area of potential tension
between the two regimes.
The Act requires service providers to make judgements about
the illegality of user content. The less information is available to a service
provider, the greater the risk of making arbitrary judgements (with potential
ECHR implications). But the more information is retained or collected in order
to make better judgements, the greater the potential conflict with the data
minimisation principle (UK GDPR Article 5(1)(c)).
Section 192 of the OSA requires service providers to make
illegality judgements on the basis of all relevant information reasonably
available to them. Ofcom’s Illegal Harms consultation document suggests that what is regarded as “reasonably available” may be limited by the
constraints of data protection law:
“However, we recognise that in
certain instances services may have access to information, which is relevant to
a specific content judgement, but which is not either typically available to
all services, which would require significant resources to collect, or the use
of which would not be lawful under data protection or privacy laws. When making
illegal content judgments, services should continue to have reasonable regard
to any other relevant information to which they have access, above and beyond
what is set out in the Illegal Content Judgements Guidance but only so long as
this information is processed lawfully, including in particular in line with
data protection laws.” [26.27]
The ICO cited this (and a related more equivocal passage at
[A1.67]) as an example of where the Ofcom guidance is “less clear about the
approach that services should take to balancing the need to make [illegal content
judgements] with the need to comply with data minimisation.” The ICO said:
“The data minimisation principle
requires that personal data being processed be relevant, adequate, and limited
to what is necessary. Where an [illegal content judgement] can be made
accurately without the need to process the additional personal data held by a
service it would not be necessary for a service to process this information
under data protection law. …
The text could also clarify that
services may not always need to consult all available information in every
instance, if it is possible to make an accurate judgement using less
information.” [page 25]
There is, however, a lurking paradox of unknown unknowns. For
an offence of a kind for which factual context is important, the service provider knows
that relevant contextual information could exist, but not whether it actually does.
Such information, if it exists, may or may not be available to the
provider: it could be wholly off-platform and beyond the provider’s knowledge; or
it could in principle be accessible on the platform, but its use potentially
constrained by data protection law.
Without knowing whether further relevant contextual
information in fact exists, how is a provider to determine whether it is able to
make an accurate judgement with only the information that it knows about? How can a
provider know whether further relevant information exists without going looking
for it, potentially breaching data protection law in the process? The ICO
Guidance says:
“You are complying with the data
protection minimisation principle, as long as you can demonstrate that using
[other contextual] information is: - necessary to achieve your purpose (e.g.
because it ensures your decisions are accurate and fair ..."
Automated content moderation
The OSA contemplates that an Ofcom Code of Practice may, for
some use cases, recommend that service providers undertake fully automated content
moderation. The UK GDPR contains specific provisions in Article 22 about certain
solely automated processing of personal data.
The Ofcom consultation says:
“Insofar as services use
automated processing in content moderation, we consider that any interference
with users’ rights to privacy under Article 8 ECHR would be slight. Such
processing would need to be undertaken in compliance with relevant data
protection legislation (including, so far as the UK GDPR applies, rules about
processing by third parties or international data transfers).” [12.72]
Similar statements are made elsewhere in the consultation.
The ICO disagrees with the first sentence:
“From a data protection
perspective, we do not agree that the potential privacy impact of automated
scanning is slight. Whilst it is true that automation may be a useful privacy
safeguard, the moderation of content using automated means will still have data
protection implications for service users whose content is being scanned.
Automation itself carries risks to the rights and freedoms of individuals,
which can be exacerbated when the processing is carried out at scale.”
[p.12]
It goes on:
“Our guidance on content
moderation is clear that content moderation involves personal data processing
at all stages of the moderation process, and hence data protection must be
considered at all stages (including when automated processing is used, not just
when a human looks at content).” [p.12]
The ICO took the view that from a data protection law
perspective, Ofcom’s proposed safeguards for the
three recommended automated content moderation measures (CSAM perceptual hash
matching, CSAM URL matching and fraud fuzzy keyword detection) are incomplete.
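For readers unfamiliar with the last of these measures, the sketch below is a purely generic illustration of what fuzzy keyword detection involves: comparing text against a list of fraud-related terms while tolerating small spelling variations. The keyword list, similarity threshold and similarity measure are all assumptions made for the example; they are not Ofcom’s specification or any provider’s actual implementation.

```python
# Generic illustration of fuzzy keyword detection (not Ofcom's specification).
# The keyword list, threshold and similarity measure are invented for the example.

from difflib import SequenceMatcher

# Hypothetical keyword list relating to articles for use in frauds.
KEYWORDS = ["fullz", "cloned card", "bank logs"]

SIMILARITY_THRESHOLD = 0.8  # invented tuning parameter


def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_text(text: str) -> list[str]:
    """Return the keywords that fuzzily match any window of words in the text."""
    hits = []
    tokens = text.lower().split()
    for keyword in KEYWORDS:
        n = len(keyword.split())
        # Compare the keyword against every n-word window of the text.
        windows = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        if any(similarity(keyword, w) >= SIMILARITY_THRESHOLD for w in windows):
            hits.append(keyword)
    return hits


if __name__ == "__main__":
    # "l0gs" is close enough to "logs" to be flagged at this threshold.
    print(flag_text("selling fresh fullz and bank l0gs, DM me"))
```

A looser threshold catches more disguised spellings but sweeps up more innocent text, which helps explain why the ICO does not regard the privacy impact of automated scanning as slight.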
As to UK GDPR Article 22, the ICO commented in relation to
the series of recommended measures that involve (or may involve) automated
processing:
“The automated content moderation
measures have the potential to engage UK GDPR Article 22, particularly measures
4G [Hash matching for CSAM] and I [Keyword detection regarding articles for use
in frauds]. Article 22 of the UK GDPR places restrictions about when services
can carry out solely automated decision-making based on personal information
where the decision has legal or similarly significant effects. … To achieve
coherence across the regimes it is important that the recommended code measures
are compatible with UK GDPR Article 22 requirements.”
It is worth noting that UK GDPR Article 22 permits such
solely automated decisions to be made where authorised by “domestic law”,
provided that the law lays down suitable safeguards. The new Data (Use and Access) Bill includes some changes to the Article 22 provisions.
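As a purely illustrative sketch of how Article 22 considerations might shape system design, the Python fragment below shows one possible way of structuring an automated moderation pipeline so that no takedown or report is made on the automated output alone: detections above a confidence threshold are queued for human review, and it is the moderator’s decision that has the significant effect. The class names, threshold and overall design are assumptions for the example, not anything prescribed by the Act, Ofcom’s draft Codes or the ICO Guidance.

```python
# Illustrative only: routing automated detections to human review so that
# decisions with legal or similarly significant effects are not taken
# solely by automated means. Names and thresholds are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class Outcome(Enum):
    NO_ACTION = auto()
    HUMAN_REVIEW = auto()   # queued for a moderator before any takedown or report


@dataclass
class Detection:
    content_id: str
    score: float            # classifier confidence that the content is illegal


REVIEW_THRESHOLD = 0.7      # invented tuning parameter


def triage(detection: Detection) -> Outcome:
    """Route an automated detection.

    Nothing is taken down or reported on the automated score alone;
    items above the threshold go to a human moderator, whose decision
    (not the classifier's) produces the significant effect.
    """
    if detection.score >= REVIEW_THRESHOLD:
        return Outcome.HUMAN_REVIEW
    return Outcome.NO_ACTION


if __name__ == "__main__":
    print(triage(Detection("abc123", score=0.92)))  # Outcome.HUMAN_REVIEW
    print(triage(Detection("def456", score=0.10)))  # Outcome.NO_ACTION
```

ICO guidance on automated decision-making indicates that human involvement must be meaningful, not a token step, for a decision to fall outside Article 22; whether a design along these lines achieves that depends on how the review is conducted in practice.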
Accuracy of illegal content judgements
Data protection law also encompasses an accuracy principle –
UK GDPR Article 5(1)(d). The ICO Guidance assesses the application of this
principle to be limited to the accuracy of facts underlying content judgements
and to accurate recording of those judgements. However, the ICO Guidance also appears
to suggest that the separate fair processing principle (UK GDPR Article
5(1)(a)) could have implications for the substantive accuracy of judgements
themselves:
“You are unlikely to be treating
users fairly if you make inaccurate judgements or biased moderation decisions
based on their personal information."
If substantive accuracy could be both a fair processing matter
and an OSA issue, how might this manifest itself? Some examples are:
Reasonably available information
As already noted, S.192 of the OSA requires a service provider to make illegality judgements on
the basis of all relevant information reasonably available to it. What is regarded
as ‘reasonably available’ may, as also noted, be affected by data
protection law, especially the data minimisation requirement to demonstrate
that processing the data is necessary to achieve the purpose of an accurate and
fair illegality judgement.
As to necessity, could the ‘unknown unknowns’ paradox (see above) come into play: how can a provider know if contextual information is
available and relevant to the accuracy of the judgement that it has to
make without seeking out the information? Could necessity be established simply
on the basis that it is the kind of offence to which contextual information (if
it existed) could be relevant, or would there have to be some justification
specific to the facts of the matter under consideration, such as an indication
that further information existed?
Generally, in connection with the information that service
providers may use to make illegal content judgements, Ofcom says:
“When making illegal content
judgements, services should continue to have reasonable regard to any other
relevant information to which they have access, above and beyond what is set
out in the Illegal Content Judgements Guidance but only so long as this information
is processed lawfully, including in particular with data protection laws”;
[26.27]
One example of this area of potential tension between the
OSA and data protection is reference back to previous user complaints when
making a judgement about content. In its section on handling user complaints, Ofcom’s
consultation says:
“To the extent that a service
needed to retain information to process complaints, this may include personal
data. However, we are not proposing to recommend that services should process
or retain any extra information beyond the minimum needed to comply with duties
which are clearly set out on the face of the Act. To the extent that services
choose to do so, this data would be held by the service subject to data
protection laws.” [16.113]
Elsewhere, Ofcom says that “depending on the context,
reasonably available information may include … complaints information” [26.26,
A1.66], subject to the proviso that:
“processing some of the types of
information (‘data’) listed below has potential implications for users’ right
to privacy. Services also need to ensure they process personal data in line
with data protection laws. In particular, the likelihood is high that in
considering U2U content a service will come across personal data including
special category data and possibly criminal offence data, to which UK data
protection laws apply.” [26.25]
The ICO Guidance said:
“Data minimisation still applies
when services use personal information to make illegal content judgements under
section 192 of the OSA. Under data protection law, this means you must use
personal information that is proportionate and limited to what is necessary to
make illegal content judgements. …
Moderation of content can be
highly contextual. Sometimes, you may need to use other types of personal
information (beyond just the content) to decide whether you need to take
moderation action, including users’ … records of previous content policy violations.
…
You are complying with the data
minimisation principle, as long as you can demonstrate that using this
information is:
- necessary to achieve your purpose (eg because it ensures your decisions are accurate and fair);
- and no less intrusive option is available to achieve this.”
The ICO submission to the Ofcom consultation says:
“Paragraphs 16.26-27 of the
consultation document state that Ofcom decided not to include a recommendation
for services to keep complaints data to facilitate appeals as part of this
measure. However, other consultation measures require or recommend the further
use of complaints data, for example the risk assessment guidance, illegal
content judgements guidance… . We think that it is important that the overall
package of measures make clear what information Ofcom considers necessary for
services to retain and use to comply with online safety obligations. This will
help services to feel confident about complying with their data protection
obligations.” [page 18]
Assuming that a service provider does have access to all
relevant information, if substantive accuracy of an illegal content judgement were an aspect of the data protection fair processing principle, might that connote
a degree of certainty that differs from the Act’s ‘reasonable grounds to infer’
standard in S.192 or the ‘awareness’ standard in Section 10(3)(b) (if they be different from each other)?
NCA reporting
Related to both automated processing and accuracy are the
ICO’s submission comments on the obligation under S.66 for a service provider
who becomes aware of previously unreported UK-linked CSAM to report it to the
National Crime Agency.
The Ofcom consultation notes:
“Interference with users’ or
other individuals’ privacy rights may also arise insofar as the option would
lead to reporting to reporting bodies or other organisations in relation to
CSAM detected using perceptual hash matching technology – for example, that a
user was responsible for uploading content detected as CSAM to the service.”
[14.80]
Ofcom goes on:
“As explained above, use of
perceptual hash matching can result in cases where detected content is a false
positive match for CSAM, or a match for content that is not CSAM and has been
wrongly included in the hash database. These cases could result in individuals
being incorrectly reported to reporting bodies or other organisations, which
would represent a potentially significant intrusion into their privacy.”
[14.84]
It adds:
“It is not possible to assess in
detail the potential impact of incorrect reporting of users: the number of
users potentially affected will depend on how services implement hash matching;
while further details of the reporting requirements under the Act are to be
specified by the Secretary of State in secondary legislation. However, the
option includes principles and safeguards in relation to the hash database, the
configuration of the technology, and the use of human moderators that are
designed to help secure that the technology operates accurately. … [14.85]
In addition, reporting bodies
have processes in place to triage and assess all reports received, ensuring
that no action is taken in cases relating to obvious false positives. These
processes are currently in place at NCMEC and will also be in place at the
Designated Reporting Body in the NCA, to ensure that investigatory action is
only taken in appropriate circumstances.” [14.86]
The ICO takes a less sanguine view of the consequences of
reporting a false positive to the NCA:
“Ofcom refers to the principles
and safeguards in the content moderation measures as being safeguards that are
designed to help secure that the technology operates accurately in connection
with user reports to the NCA. Accuracy is also a relevant consideration in data
protection law. The accuracy principle requires that services take all
reasonable steps to ensure that the personal data they process is not incorrect
or misleading as to any matter of fact. Where content moderation decisions
could have significant adverse impacts on individuals, services must be able to
demonstrate that they have put sufficient effort into ensuring accuracy.
We are concerned that the
safeguards in measure 4G do not differentiate between the level of accuracy
that is appropriate for reports to the NCA (which carries a particular risk of
serious damage to the rights, freedoms and interests of a person who is incorrectly
reported) and other significant but potentially less harmful actions such as
content takedown.
Our reading of measure 4G is that
it could allow for the content moderation technology to be configured in such a
way that recognises that false positives will be reported to the NCA. Whilst we
acknowledge that it may not be possible to completely eliminate false positives
being reported, we are concerned that a margin for error could be routinely
“factored into” a service’s systems and processes as a matter of course.
This is unlikely to be compatible
with a service taking all reasonable steps to ensure that the personal data it
processes is not inaccurate.”
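To make the configuration point concrete, here is a minimal sketch (in Python, with invented hash values and thresholds) of the trade-off inherent in perceptual hash matching: a looser match threshold catches more edited copies of known material but also matches more unrelated content. The ICO’s concern, as quoted above, is with a margin for error of that kind being routinely accepted for onward reporting to the NCA, rather than only for routing content to human review.

```python
# Illustrative only: how a distance threshold in perceptual hash matching
# trades false negatives against false positives. Hash values, the database
# and the threshold are invented; real systems use dedicated perceptual
# hashing tools and curated hash lists.


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of known-CSAM perceptual hashes (placeholder values).
HASH_DATABASE = {0x9F3BAA1055E2D4C7, 0x0123456789ABCDEF}


def is_match(image_hash: int, max_distance: int) -> bool:
    """Report a match if the hash is within max_distance bits of any entry.

    A larger max_distance catches more edited copies of known material
    (fewer false negatives) but also matches more unrelated images
    (more false positives) - the margin for error the ICO refers to.
    """
    return any(hamming_distance(image_hash, h) <= max_distance for h in HASH_DATABASE)


if __name__ == "__main__":
    candidate = 0x9F3BAA1055E2D4C6  # differs from a database entry by one bit
    print(is_match(candidate, max_distance=0))  # False: exact matches only
    print(is_match(candidate, max_distance=8))  # True: looser threshold
```

On the ICO’s analysis, the accuracy expected where a match feeds a report to the NCA would be higher than where it only triggers takedown or human review, which might point towards tighter thresholds or additional checks at the reporting stage.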
One point of particular interest is the ICO’s apparent distinction
between the level of accuracy for content takedown and that for reporting to
the NCA. Both Section 10(3)(b) (the takedown obligation) and Section 66 (as
interpreted by Section 70) use the same language to trigger the respective
obligations (emphasis added):
- A duty to operate a service using proportionate systems and processes designed to … where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content. (S.10(3)(b))
- … must operate the service using systems and processes which secure (so far as possible) that the provider reports all detected and unreported CSEA content present on the service to the NCA. … CSEA content is “detected” by a provider when the provider becomes aware of the content, whether by means of the provider’s systems or processes or as a result of another person alerting the provider. (S.66/70)
The ICO argument appears to suggest that data protection
considerations should inform the construction of the sections, with the
result that the same word ‘aware’ in the two OSA provisions would connote different
levels of confidence in the accuracy of the information on which a decision is
based.
End to end encryption
The Bill was loudly and repeatedly criticised by privacy and
civil liberties campaigners for potentially threatening the ability to use end-to-end
encryption on private messaging services. The provision that gave rise to this
was what is now Section 121: a power for Ofcom to require, by notice to a U2U
service provider, use of accredited technology to identify and swiftly take
down, or prevent individuals encountering, CSEA content, whether communicated
publicly or privately by means of the service. Service providers would have the
option of developing or sourcing their own equivalent technology.
Under Section 125, a notice requiring the use of accredited technology
is to be taken as requiring the provider to make such changes to the design or
operation of the service as are necessary for the technology to be used
effectively.
The concern with these provisions was that the effect of a
notice could be to require a private messaging provider to cease using E2E
encryption if it was incompatible with the technology required by the notice.
The S.121 power is self-standing, separate from the Act’s
safety duties on service providers. It will be the subject of a separate Ofcom
consultation, scheduled for December 2024.
Concomitantly, Ofcom is prevented from using its safety duty
enforcement powers to require proactive technology to be used by a private communications
service (S.136(6)). Broadly speaking, proactive technology is content
identification technology, user profiling technology, or behaviour
identification technology.
That is also reflected in the Schedule 4 restrictions on
what Ofcom can recommend in a Code of Practice:
“Ofcom may not recommend in a
Code of Practice the use of the technology to analyse user generated content
communicated “privately”, or metadata relating to user-generated content communicated
“privately”” [14.14]
Thus, in effect, the Act’s safety duties cannot be
interpreted so as to require a private communications service to use proactive
technology. That is a matter for the self-standing S.121 power.
The Ofcom consultation states that E2E encryption is not
inherently bad, and goes on to acknowledge its benefits:
“The role of the new online
safety regulations is not to restrict or prohibit the use of such
functionalities, but rather to get services to put in place safeguards which
allow users to enjoy the benefits they bring while managing risks
appropriately” [Vol 2, Introduction]
Nevertheless, it also cites E2E encryption as a
risk factor. For instance: “offenders often use end-to-end encrypted services to evade detection” [Vol 2, Introduction]; “end-to-end encryption guarantees a user’s privacy and security of messages, but makes it harder for users to moderate for illegal content.” [6.12]; “Private messaging services with encryption are particularly risky, as they make the exchange of CSAM harder to detect.” [6C.139]; and “If your service allows encrypted messaging, we would expect you to consider how this functionality can be used by potential perpetrators to avoid monitoring of communications while sharing illegal content such as CSAM or conducting illegal behaviour.” [Table 14] The theme is repeated numerous times in relation to different offences.
In relation to its specific proposal for ‘hash matching’ to
detect and remove known CSAM (Child Sexual Abuse Material), Ofcom says:
“Consistent with the restrictions
in the Act, this proposal does not apply to private communications or end-to-end
encrypted communications. We are not making any proposals that would involve
breaking encryption. However, end-to-end encrypted services are still subject
to all the safety duties set out in the Act and will still need to take
steps to mitigate risks of CSAM on their
services” [Overview]
The ICO did not challenge Ofcom’s evidence bases for
concluding that E2E encryption was a risk. However, it said:
“We are concerned that the
benefits of these functionalities are not given enough emphasis in the risk
assessment guidance and risk profiles (Annex 5). These are the documents that
U2U services are most likely to consult on a regular basis. We consider that
there is a risk that the risk assessment process may be interpreted by some
services to mean that functionalities such as E2EE and anonymity/pseudonymity
are so problematic from an online safety perspective that they should be
minimised or avoided. If so, the risk assessment process could create a
chilling effect on the deployment of functionalities that have important
benefits, including keeping users safe online.”
and:
“In summary, we therefore suggest
that the guidance should make it clear that the online safety regime does not
restrict or prohibit the use of these functionalities and that the emphasis is
on requiring safeguards to allow users to enjoy the benefits while managing
risks appropriately.”
Whilst the ICO’s comments do not necessarily reflect tension
between the OSA and data protection regimes as such, a difference of emphasis is
detectable.
Generally, critics of the Act have long argued that requiring service providers to assess illegality is a recipe for arbitrary decision-making and over-removal of legal user content, likely to constitute an unjustified interference with the right to freedom of expression. As Ofcom and the ICO attempt to get to grips with the practical realities of the duties, data protection and privacy are now joining the fray.