The Information Commissioner’s Office has recently published its submission to Ofcom’s consultation on additional safety measures under the Online Safety Act.
The
consultation is the second instalment of Ofcom’s iterative approach to writing Codes
of Practice for user-to-user and search service providers. The first round
culminated in Codes of Practice that came into force in March 2025 (illegal content)
and July 2025 (protection of children). A service provider that implements the
recommendations in an Ofcom Code of Practice is deemed to comply with the
various safety duties imposed by the Act.
The
recommendations that Ofcom proposes in this second instalment are split almost equally between content-related and non-content measures (see Annex for a
tabular analysis). Content-related measures require the service provider to
make judgements about items of user content. Non-content measures are not
directly related to user content as such.
Thus the non-content
measures mainly concern age assurance, certain livestreaming features and
functionality that Ofcom considers should not be available to under-18s, and default
settings for under-18s. Two more non-content measures concern a livestream user
reporting mechanism and crisis response protocols.
The content-related measures divide into reactive (content moderation, user sanctions and appeals) and proactive (automated content detection in various contexts). Ofcom cannot recommend use of proactive technology in relation to user content communicated privately.
The applicability of each measure to a given service provider depends on various size, risk, functionality and other criteria set by Ofcom.
Proactive content-related measures are especially controversial, since they involve platforms deploying technology to scan and analyse users’ content with a view to it being blocked, removed, deprioritised or affected in some other way.
The ability
of such technology to make accurate judgements is inevitably open to question,
not only because of limitations of the technology itself but also because
illegality often depends on off-platform contextual information that is not
available to the technology. Inaccurate judgements result in false positives
and, potentially, collateral damage to legitimate user content.
The ICO submissions
What does the ICO have to say? Given the extensive territory covered by the Ofcom consultation, quite a lot: 32 pages of detailed commentary. Many, but not all, of the comments concern the accuracy of various kinds of proactive content detection technology.
As befits its regulatory remit, the ICO approaches Ofcom’s recommendations from the perspective of data protection: anything that involves processing of personal data. Content detection, judgements and consequent action are, from the ICO’s perspective, processes that engage the data protection accuracy principle and the overall fairness of processing.
Although the
ICO does not comment on ECHR compliance, similar considerations will inform the compatibility of some of Ofcom’s content-related proactive technology recommendations with Article 10 ECHR (freedom of expression).
The ICO’s main comments include:
- Asking Ofcom to clarify its evidence on the availability of accurate, effective and bias-free technologies for harms in scope of its "principles-based" proactive technology measures. Those harms are, for illegal content: image-based CSAM, CSAM URLs, grooming, fraud and financial services, encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content. This is probably the most significant of the ICO's suggestions, in effect challenging Ofcom to provide stronger evidential support for its confidence that such technologies are available for all those kinds of harm.
- For Ofcom’s principles-based measures, the ICO recommends that a provider, when assessing whether a given technology complies with Ofcom’s proactive technology criteria, should have to consider the “impact and consequences” of incorrect detections, including any sanctions that services may apply to users as a result of such detections. Those may differ for different kinds of harm.
- Suggesting that Ofcom’s Illegal Content Codes of Practice should specify that services should have “particular consideration regarding the use of an unverified hash database” (as would be permissible under Ofcom’s proposed measure) for Intimate Image Abuse (IIA) content.
Before
delving into these specific points, some of the ICO’s more general observations
on Ofcom’s consultation are noteworthy.
Data protection versus privacy
The ICO gently admonishes Ofcom for conflating Art 8 ECHR privacy
protections (involving consideration of whether there is a reasonable expectation of privacy)
with data protection.
“For example, section 9.158 of the privacy and data
protection rights assessment suggests that the degree of interference with data
protection rights will depend on whether the content affected by the measures
is communicated publicly or privately. This is not accurate under data
protection law; irrespective of users’ expectations concerning their content
(and/or associated metadata), data protection law applies where services are
processing personal data in proactive technology systems. Services must ensure
that they comply with their data protection obligations and uphold users’ data
protection rights, regardless of whether communications are deemed to be public
or private under the OSA.”
The ICO
suggests that it may be helpful for Art 8 and data protection to be considered
separately.
Data protection, automated and human moderation
The ICO
“broadly supports” Ofcom’s proposed measures for perceptual hash-matching for
IIA and terrorism content (discussed further below). However, in this context it again takes issue
with Ofcom’s conflation of data protection and privacy. This time the ICO goes
further, disagreeing outright with Ofcom’s characterisations:
“For example, the privacy and data protection rights
assessments for both the IIA and terrorism hash matching measures state that
where services carry out automated processing in accordance with data
protection law, that processing should have a minimal impact on users’ privacy.
Ofcom also suggests that review of content by human moderators has a more
significant privacy impact than the automated hash matching process. We
disagree with these statements. Compliance with data protection law does not,
in itself, guarantee that the privacy impact on users will be minimal.
Automation carries inherent risks to the rights and freedoms of individuals,
particularly when the processing is conducted at scale.”
The ICO’s
disagreement with Ofcom’s assessment of the privacy impact of automated
processing harks back to the ICO’s comments on Ofcom’s original Illegal Harms
consultation last year. Ofcom had said:
“Insofar as services use automated processing in content
moderation, we consider that any interference with users’ rights to privacy
under Article 8 ECHR would be slight.”
The ICO
observed in its submission to that consultation:
“From a data protection perspective, we do not agree that the
potential privacy impact of automated scanning is slight. Whilst it is true
that automation may be a useful privacy safeguard, the moderation of content
using automated means will still have data protection implications for service
users whose content is being scanned. Automation itself carries risks to the
rights and freedoms of individuals, which can be exacerbated when the
processing is carried out at scale.”
Hash-matching, data protection, privacy and freedom of expression
In relation to hash-matching the ICO
stresses (as it does also in relation to Ofcom’s proposed principles-based
measures, discussed below) that accuracy of content judgements impacts not only
freedom of expression, but privacy and data protection:
“For example, accuracy of detections and the risk of false
positives made by hash matching tools are key data privacy considerations in
relation to these measures. Accuracy of detections has been considered in
Ofcom’s freedom of expression rights assessment, but has not been discussed as
a privacy and data protection impact. The accuracy principle under data
protection law requires that personal information must be accurate, up-to-date,
and rectified where necessary. Hash matching tools may impact users’ privacy
where they use or generate inaccurate personal information, which can also lead
to unfair consequences for users where content is incorrectly actioned or
sanctions incorrectly applied.”
Principles-based proactive technology measures – evidence of available technology
The Ofcom consultation proposes what it calls "principles-based" measures (ICU C11, ICU C12, PCU C9, PCU C10), requiring certain U2U platforms to assess available proactive technology and to deploy it if it meets proactive technology criteria defined by Ofcom.
These would apply to certain kinds of “target” illegal content and content harmful to children. Those are, for illegal content: image-based CSAM, CSAM URLs, grooming, fraud (and financial services), and encouraging or assisting suicide (or attempted suicide); and for content harmful to children: pornographic, suicide, self-harm and eating disorder content.
Ofcom says that it has a higher degree of confidence that accurate, effective and bias-free proactive technologies are likely to be available to address those harms. Annex 13 of the consultation devotes seven pages to the evidence supporting that confidence.
The ICO says
that it is not opposed to Ofcom’s proposed proactive technology measures in
principle. But as currently drafted the measures “present a number of questions
concerning alignment with data protection legislation”, which the ICO describes
as “important points”.
In the ICO’s view there is a “lack of clarity” in the consultation documents about the availability of proactive technology that meets Ofcom's proactive criteria for all harms in scope of the measures. The ICO suggests that this could affect how platforms go about assessing the availability of suitable technology:
“…we are concerned that the uncertainty about the effectiveness of proactive technologies currently available could lead to confusion for organisations seeking to comply with this measure, and create the risk that some services will deploy technologies that are not effective or accurate in detecting the target harms.”
It goes on to comment on the evidence set out in Annex 13:
“Annex 13 outlines some evidence on the effective deployment
of existing technologies, but this is not comprehensively laid out for all the
harms in scope. We consider that a more robust overview of Ofcom’s evidence of
the tools available and their effectiveness would help clarify the basis on
which Ofcom has determined that it has a higher degree of confidence about the
availability of technologies that meet its criteria. This will help to minimise
the risk of services deploying proactive technologies that are incompatible
with the requirements of data protection law.”
The ICO approaches this only as a matter of compliance with data protection law. Its comments do, however, bear tangentially on the argument that Ofcom’s principles-based proactive technology recommendations, lacking quantitative accuracy and effectiveness criteria, are too vague to comply with Art 10 ECHR.
Ofcom has to date refrained from proposing concrete thresholds for false positives, both in this consultation and in a previous consultation on technology notices under S.121 of the Act. If Ofcom were to accede to the ICO’s suggestion that it should clarify the evidential basis of its higher degree of confidence in the likely availability of accurate and effective technology for harms in scope, might that lead it to grasp the nettle of quantifying acceptable limits of accuracy and effectiveness?
Principles-based proactive technology criteria – variable impact on users
Ofcom’s
principles-based measures do set out criteria that proactive technology would have
to meet. However, the proactive technology criteria are framed as qualitative
factors to be taken into account, not as threshold conditions.
The ICO does
not go so far as to challenge the absence of threshold conditions. It supports “the
inclusion of these additional factors that services should take into account”
and considers that “these play an important role in supporting the accuracy and
fairness of the data processing involved.”
However, it
notes that:
“…the factors don’t recommend that services consider the
distinction between the different types of impacts on users that may occur as a
result of content being detected as target content.”
It considers
that:
“… where personal data processing results in more severe
outcomes for users, it is likely that more human review and more careful
calibration of precision and recall to minimise false positives would be
necessary to ensure the underpinning processing of personal data is fair.”
The ICO therefore
proposes that Ofcom should add a further factor, recommending that service
providers:
“…also consider the impact and consequences of incorrect
detections made by proactive technologies, including any sanctions that
services may apply to users as a result of such detections. …This will help
ensure the decisions made about users, using their personal data, are more
likely to be fair under data protection law.”
This is all
against the background that:
“Where proactive technologies are not accurate or effective
in detecting the harms in scope of the measures, there is a risk of content
being incorrectly classified as target illegal content or target content
harmful to children. Such false positive outcomes could have a significant
impact on individuals’ data protection rights and lead to significant data
protection harms. For example, false positives could lead to users wrongly
having their content removed, their accounts banned or suspended or, in the case
of detection of CSEA content, users being reported to the National Crime Agency
or other organisations.”
That identifies the perennial problem with proactive technology measures. However, while the ICO proposal would add contextual nuance to service providers’ multi-factorial assessment of risk of false positives, it does not answer the fundamental question of how many false positives is too many. That would remain for service providers to decide, with the likelihood of widely differing answers from one service provider to the next. Data protection law aside, the question would remain of whether Ofcom’s proposed measures comply with the "prescribed by law" requirement of the ECHR.
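The ICO’s point about calibrating precision and recall, and the open question of how many false positives is too many, can be illustrated with some simple arithmetic. The sketch below is purely illustrative: the volumes, prevalence, recall and false positive rates are hypothetical assumptions, not figures drawn from Ofcom’s consultation or the ICO’s submission. It shows how two providers running the same notional tool at different calibrations, on the same volume of content, would reach very different false-positive outcomes.

```python
# Illustrative only: all figures below are hypothetical assumptions, not
# numbers taken from Ofcom's consultation or the ICO's submission. The sketch
# shows how the same notional detection tool, calibrated to different false
# positive rates, produces very different outcomes at scale.

def detection_outcomes(items_scanned: int, prevalence: float,
                       recall: float, false_positive_rate: float):
    """Return (true positives, false positives, precision) for one scanning run."""
    target_items = items_scanned * prevalence              # items that really are target content
    benign_items = items_scanned - target_items            # everything else
    true_positives = target_items * recall                 # correctly detected
    false_positives = benign_items * false_positive_rate   # benign content wrongly flagged
    precision = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, precision

# Two hypothetical calibrations of the same tool, scanning 10 million items a
# day where 0.1% of them are actually target content and recall is 95%.
for fpr in (0.001, 0.0001):
    tp, fp, precision = detection_outcomes(10_000_000, 0.001, 0.95, fpr)
    print(f"false positive rate {fpr:.2%}: "
          f"~{fp:,.0f} benign items wrongly flagged per day, precision {precision:.0%}")
```

On these assumed figures, loosening the false positive rate by a factor of ten takes precision from roughly 90% to below 50% and adds some nine thousand wrongly flagged items a day. That is the scale of variation which the absence of a quantified threshold leaves to each service provider’s judgement.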
Perceptual hash-matching – sourcing image-based IIA hashes
Ofcom’s
recommendations include perceptual hash matching against databases of hashes,
for intimate image abuse and terrorist content.
Ofcom proposes that, for IIA content, hash-matching could be carried out against an unverified database of hashes. That contrasts with its recommendations for CSAM and terrorism content hash-matching. The ICO observes:
“Indeed Ofcom notes that the only currently available
third-party database of IIA hashes does
not verify the content; instead, content is self-submitted by victims and
survivors of IIA.
Ofcom acknowledges that third party databases may contain
some images that are not IIA, resulting in content being erroneously identified
as IIA.”
Ofcom said in the consultation:
“We are not aware of any evidence of unverified hash databases being used maliciously with the aim of targeting content online for moderation. While we understand the risk, we are not aware that it has materialised on services which use hash matching to tackle intimate image abuse.”
Under Ofcom's proposals the service provider would be expected to treat a positive
match by perceptual hash-matching technology as “reason to suspect” that the
content may be intimate image abuse. It would then be expected to subject an
“appropriate proportion” of detected content to human review.
According to
Annex 14 of the consultation, among the factors that service providers should
consider when deciding what proportion of content to review would be:
“The principle that content with a higher likelihood of being
a false positive should be prioritised for review, with particular
consideration regarding the use of an unverified hash database.”
The ICO notes
that having “particular consideration regarding use of an unverified hash
database” does not appear in the proposed Code of Practice measures themselves.
It observes:
“Having regard to the use of unverified databases is an
important privacy and data protection safeguard. It is our view that due to the
increased risk of false positive detections where services use unverified hash
databases, services may need to review a higher proportion of the content
detect [sic] by IIA hash matching tools in order to meet the fairness and
accuracy principles of data protection law.”
The ICO recommends that the factor should be added to the Code of Practice.
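As a rough illustration of the triage logic described in this section, the sketch below shows how a provider might route positive perceptual hash matches to human review, prioritising detections that are more likely to be false positives and applying a higher review proportion to matches against an unverified database. Everything here is an assumption for illustration: the Hamming-distance threshold, the review proportions and the data structure are invented, and neither Ofcom’s measures nor the ICO’s submission prescribe any particular implementation.

```python
# Minimal sketch of the review-triage logic described above, not an
# implementation of any Ofcom measure. The hash distance threshold and the
# review proportions are hypothetical assumptions chosen for illustration.

from dataclasses import dataclass

@dataclass
class HashMatch:
    content_id: str
    hamming_distance: int    # distance between the upload's perceptual hash and a database hash
    database_verified: bool  # False for, e.g., a self-submitted (unverified) IIA hash database

DISTANCE_THRESHOLD = 10             # assumed: at or below this counts as a positive match
REVIEW_PROPORTION_VERIFIED = 0.2    # assumed proportion of detections sent to human review
REVIEW_PROPORTION_UNVERIFIED = 0.6  # assumed higher proportion where the hash source is unverified

def triage(matches: list[HashMatch]) -> dict[str, list[str]]:
    """Send a proportion of positive matches to human review, weighted towards
    detections that are more likely to be false positives."""
    positives = [m for m in matches if m.hamming_distance <= DISTANCE_THRESHOLD]
    review, held_back = [], []
    for verified, proportion in ((False, REVIEW_PROPORTION_UNVERIFIED),
                                 (True, REVIEW_PROPORTION_VERIFIED)):
        group = [m for m in positives if m.database_verified == verified]
        # Weaker matches (larger distance) are likelier false positives,
        # so they go to the front of the review queue.
        group.sort(key=lambda m: m.hamming_distance, reverse=True)
        cutoff = round(proportion * len(group))
        review += [m.content_id for m in group[:cutoff]]
        held_back += [m.content_id for m in group[cutoff:]]
    return {"human_review": review, "not_selected_for_review": held_back}

# Example: an unverified-database hit is reviewed; a strong verified-database hit is not.
sample = [HashMatch("a1", 3, True), HashMatch("b2", 9, False), HashMatch("c3", 14, True)]
print(triage(sample))   # {'human_review': ['b2'], 'not_selected_for_review': ['a1']}
```

The split by database verification reflects the ICO’s point: where hashes are self-submitted and unverified, a larger slice of detections would go to a human before any action is taken.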
Other ICO recommendations
Other ICO recommendations highlighted in its Executive Summary include:
- Suggesting that additional safeguards should be outlined in the Illegal Content Judgements Guidance where, as Ofcom proposes, illegal content judgements might be made about CSAM content that is not technically feasible to review (for instance on the basis of group names, icons or bios). The ICO also suggests that Ofcom should clarify which users involved in messaging, group chats or forums would be classed as having shared CSAM when a judgement is made on the basis of a group-level indicator.
- As regards sanctions against users banned for CSEA content, noting that methods to prevent such users returning to the service may engage the storage and access technology provisions of the Privacy and Electronic Communications Regulations (PECR); and suggesting that for the purposes of appeals Ofcom should clarify whether content determined to be lawful nudity content should still be classified as ‘CSEA content proxy’ (i.e. prohibited by terms of service), since this would affect whether services could fully reverse a ban.
- Noting that implementation of tools to prevent capture and recording of livestreams, in accordance with Ofcom’s recommended measure, may also engage the storage and access technology provisions of PECR.
- Supporting Ofcom’s proposals to codify the definition of highly effective age assurance (HEAA) in its Codes of Practice; and emphasising that implementation of HEAA must respect privacy and comply with data protection law.
Most of the
ICO comments that are not included in its Executive Summary consist of various
observations on the impact of, and need to comply with, data protection law.
Annex – Ofcom’s proposed additional safety measures
| Recommendation | Reference | Categorisation |
| --- | --- | --- |
| **Livestreaming** | | |
| User reporting mechanism that a livestream contains content that depicts the risk of imminent physical harm. | ICU D17 | Non-content |
| Ensure that human moderators are available whenever users can livestream. | ICU C16 | Reactive content-related |
| Ensure that users cannot, in relation to a one-to-many livestream by a child (identified by highly effective age assurance) in the UK: a) comment on the content of the livestream; b) gift to the user broadcasting the livestream; c) react to the livestream; d) use the service to screen capture or record the livestream; e) where technically feasible, use other tools outside of the service to screen capture or record the livestream. | ICU F3 | Non-content |
| **Proactive technology** | | |
| Assess available proactive technology for detecting, or supporting the detection of, target illegal content and deploy it if it meets the proactive technology criteria. | ICU C11 | Proactive content-related |
| Assess existing proactive technology that they are using to detect or support the detection of target illegal content against the proactive technology criteria and, if necessary, take steps to ensure the criteria are met. | ICU C12 | Proactive content-related |
| As ICU C11, but for target content harmful to children. | PCU C9 | Proactive content-related |
| As ICU C12, but for target content harmful to children. | PCU C10 | Proactive content-related |
| **Intimate image abuse (IIA) hash matching** | | |
| Use perceptual hash matching to detect image-based intimate image abuse content so it can be removed. | ICU C14 | Proactive content-related |
| **Terrorism hash matching** | | |
| Use perceptual hash matching to detect terrorism content so that it can be removed. | ICU C13 | Proactive content-related |
| **CSAM hash matching (extended to more service providers)** | | |
| Ensure that hash-matching technology is used to detect and remove child sexual abuse material (CSAM). | ICU C9 | Proactive content-related |
| **Recommender systems** | | |
| Design and operate recommender systems to ensure that content indicated potentially to be certain kinds of priority illegal content is excluded from users’ recommender feeds, pending further review. | ICU E2 | Proactive content-related |
| **User sanctions** | | |
| Prepare and apply a sanctions policy in respect of UK users who generate, upload, or share illegal content and/or illegal content proxy, with the objective of preventing future dissemination of illegal content. | ICU H2 | Reactive content-related |
| As ICU H2, but for content harmful to children and/or harmful content proxy. | PCU H2 | Reactive content-related |
| Set and record performance targets for the content moderation function covering the time period for taking relevant content moderation action. | ICU C4, PCU C4 | Reactive content-related |
| **CSEA user banning** | | |
| Ban users who share, generate, or upload CSEA, and those who receive CSAM, and take steps to prevent their return to the service for the duration of the ban. | ICU H3 | Reactive content-related |
| **Highly effective age assurance** | | |
| Definitions of highly effective age assurance; principles that providers should have regard to when implementing an age assurance process. | ICU B1, PCU B1 | Non-content |
| Appeals of highly effective age assurance decisions. | ICU D15, ICU D16 | Non-content |
| **Increasing effectiveness for U2U settings, functionalities, and user support** | | |
| Safety defaults and support for child users. | ICU F1 & F2 | Non-content |
| **Crisis response** | | |
| Prepare and apply an internal crisis response protocol. Conduct and record a post-crisis analysis. Dedicated law enforcement crisis communication channel. | ICU C15 / PCU C11 | Non-content |
| **Appeals** | | |
| Appeals to cover decisions taken on the basis that content was an ‘illegal content proxy’. | ICU D | Reactive content-related |
| Appeals to cover decisions taken on the basis that content was a ‘content harmful to children proxy’. | PCU D | Reactive content-related |
[Amended 'high' degree of confidence to 'higher' in two places. 17 Nov 2025.]
