UK proposes rules to protect against anonymous online trolls

March 18, 2022

The UK Government has added two new duties to the proposed Online Safety Bill (the Bill) that are aimed at protecting people against anonymous online abuse. These measures would give users of “main social media firms” more control over who can interact with them and the types of content they see (see the Government’s press release of 25 February 2022).

What is the Online Safety Bill?

The draft Bill, published in May 2021, is the UK’s proposed regulatory framework to address illegal and harmful content online. In its current form, it will apply to all providers (subject to certain exemptions) of:

  • User-to-user services – that is, those that make user-generated content (UGC) available to other users either publicly or privately. This includes social media platforms and other platforms enabling users to publish or share UGC (e.g. a blogging website).
  • Search services such as search engines, or services that include search engines.

Regulated service providers will have to, among other things:

  • Conduct risk assessments relating to illegal content accessible by all users and harmful content accessible to children (if applicable).
  • Implement measures to minimise the presence and sharing of illegal content and prevent children from accessing harmful content (if applicable).

Providers of “Category 1 services” will also have to conduct risk assessments and implement safety measures in respect of content that is harmful to adults. Regulations will specify threshold conditions for Category 1 services, with reference to the number of users and the functionality of the service. The UK Government has described these as “the largest and most popular social media sites”.

The Bill will apply to providers of regulated services both within and outside the UK if the service has a significant number of UK users, is targeted at UK users, or is accessible to UK users and presents a material risk of significant harm to them.

The Bill is currently making its way through the Parliamentary process and a number of changes (including those described below) are expected to be considered.

New duties protecting against anonymous online abuse

The two new duties will require Category 1 service providers to give users the ability to:

  • Block anonymous trolls. Providers must offer ways for users to: (i) verify their own identities; and (ii) control which other users can engage with them. The following considerations would apply:
    • Providers may choose which methods to use for this purpose. For example, identities could be verified through official identity documents, mobile number authentication or picture verification. Users could then be given the option to receive messages and other content only from verified accounts.
    • The intention is not to ban anonymity online (which is recognised as positive in many instances) or to restrict online freedom of expression (provided the content is legal); the verification and blocking methods must therefore be optional.
    • Users must be able to determine whether they wish to be exposed to content from unknown sources.
    • Guidance on verification methods will be provided by Ofcom, taking into account data protection concerns, accessibility for vulnerable users and technical feasibility.
  • Opt out of seeing harmful content. Providers must offer adult users tools to choose whether they are exposed to content that is legal but otherwise harmful. The Online Safety Bill already requires regulated service providers to remove illegal content. Many platforms also prohibit legal but harmful content in their user terms and Acceptable Use Policies, but they are not legally required to remove such content or the users who generate it. The new tools will need to let users control not only whether they see harmful content, but also the types of harmful content that they see. For example, they might enable a user to select topics that will not be recommended or promoted to them, or over which sensitivity screens appear (a simplified sketch of how a platform might model these user choices appears after this list).
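
For illustration only, the sketch below (in TypeScript) shows one way a platform might model these optional user choices. The type and function names (UserPreferences, ContentItem, resolveVisibility) and the topic labels are hypothetical and are not drawn from the Bill or from any platform’s implementation; the Bill leaves the choice of verification and filtering methods to providers, subject to Ofcom guidance.

    // Hypothetical sketch only: the Bill does not prescribe any data model or API.
    // All names and topic labels here are invented for illustration.

    type VerificationStatus = "verified" | "unverified";

    interface UserPreferences {
      // Duty 1: optionally receive messages and content only from verified accounts.
      verifiedOnly: boolean;
      // Duty 2: topics of legal-but-harmful content the user has opted out of seeing.
      optedOutTopics: Set<string>;
      // Topics the user wants hidden behind a sensitivity screen rather than hidden entirely.
      screenedTopics: Set<string>;
    }

    interface ContentItem {
      authorStatus: VerificationStatus;
      topics: string[]; // e.g. labels produced by the platform's own content classification
    }

    type Visibility = "show" | "hide" | "screen";

    // Decide how a piece of content is presented to a user, based on that user's choices.
    function resolveVisibility(item: ContentItem, prefs: UserPreferences): Visibility {
      if (prefs.verifiedOnly && item.authorStatus !== "verified") {
        return "hide"; // user has chosen to interact only with verified accounts
      }
      if (item.topics.some((t) => prefs.optedOutTopics.has(t))) {
        return "hide"; // user has opted out of this category of legal-but-harmful content
      }
      if (item.topics.some((t) => prefs.screenedTopics.has(t))) {
        return "screen"; // show only behind a sensitivity screen
      }
      return "show";
    }

    // Example: an unverified account posting about a topic the user has chosen to screen.
    const prefs: UserPreferences = {
      verifiedOnly: false,
      optedOutTopics: new Set(["self-harm"]),
      screenedTopics: new Set(["graphic-violence"]),
    };
    console.log(
      resolveVisibility({ authorStatus: "unverified", topics: ["graphic-violence"] }, prefs)
    ); // "screen"

In practice, classifying content into topics and verifying identities would be far more involved; the sketch simply shows how optional, user-level controls of the kind the new duties contemplate could sit on top of those processes.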

A number of social media platforms already enable verification of the official accounts of celebrities and brands. However, only certain accounts are eligible for verification, and the platforms’ explanations of their processes suggest that at least some manual assessment is involved. Mass verification of a general user base would undoubtedly need to be largely automated.

Some social media platforms currently have sensitive content control features, while others also place warnings over certain graphic content.

It will be interesting to see how existing features align with the precise legal requirements when a revised draft of the Online Safety Bill is published in due course.