Is social media regulation on the way?

April 22, 2021

So far, 2021 has seen some social media businesses implementing content takedowns, rolling out internal reforms and banning high-profile individuals and applications from their services. This has led some tech commentators to ask whether this could be a defining moment in determining how social media businesses moderate content published on their platforms.

Some are calling for tougher regulation in order to make BigTech more accountable for content that appears on their platforms. For example, in the UK:

  • A government cabinet minister has acknowledged explicitly that recent events “raise regulation questions”.[1] The UK government accordingly plans to introduce an Online Safety Bill this year, which reportedly will impose a new duty of care on a number of online service providers and will determine what steps must be taken to address illegal and harmful content.
  • The UK’s first national action plan for the safety of journalists was recently published, which (among other things) includes measures such as commitments from social media platforms to respond promptly to complaints of threats to journalists’ safety.[2]
  • The UK government is reviewing whether the criminal law can more effectively address online abuse.

The UK is not alone in pursuing initiatives such as these: there are similar calls for greater regulation of BigTech in the United States, Australia and the European Union.


The US position

In the United States, section 230 of the 1996 Communications Decency Act (47 U.S.C. § 230) has once again been called into question. Scrutiny of section 230 is not new, and the technology industry has long held that it is a vital protection.

Section 230 generally protects the owners of any “interactive computer service” from liability for content posted by third parties, and entitles internet platforms and social media sites to moderate (but not substantively change) content from third parties without being held liable for what those third parties say.

Because most social media sites are not run by government entities, the freedom of speech requirements of the First Amendment to the U.S. Constitution do not apply: social media sites are free to permit or to deny access to anyone as long as that decision does not violate laws applicable to private entities.

In February 2021, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act was introduced into the US Senate (S. 299). It aims to limit the scope of protection offered by section 230 and to enable social media businesses to be held accountable for enabling cyber-stalking, targeted harassment and discrimination on their platforms. As at the date of writing, it is not yet clear how the proposal will fare in Congress.

To track further developments in social media law in North America, see our Social Media Law Bulletin.


These are just a few examples of the initiatives that have been proposed across the globe. Viewed together, the approach to regulating BigTech appears to be a multi-faceted one, with the focus extending beyond content published on platforms. For example:

  • EU regulators are considering introducing conditions that would require BigTech to pay for news services and to inform publishers about changes to how they rank news stories on their sites.[3]
  • The European Union’s proposed Digital Markets Act aims to prohibit unfair practices such as “self-preferencing” (for example, when BigTech prioritise their own goods, apps or services in product displays), and would introduce interoperability and data sharing obligations. See our blog, Shaken, not stirred: EU mixes big tech regulation and antitrust, for more detail.
  • Australia recently passed the News Media Bargaining Code with the aim of requiring certain BigTech companies to pay for news content on their platforms. The code also requires tech platforms to inform publishers of changes to the algorithms that determine which stories are presented. There were suggestions that BigTech might withdraw from Australia, with one such business introducing a temporary ban on news content; however, individual deals have now been agreed between BigTech companies and local media companies.[4]

It remains to be seen how the various global proposals will unfold, and whether an implicit consensus of sorts on an appropriate regulatory response will be reached internationally. But there is no doubt that change is coming in many jurisdictions for BigTech and social media platforms. The question is how much.


[1] https://www.bbc.co.uk/news/uk-politics-55609903

[2] https://www.gov.uk/government/publications/national-action-plan-for-the-safety-of-journalists/national-action-plan-for-the-safety-of-journalists

[3] https://www.ft.com/content/4c40c890-afd3-40a3-9582-78a66c37a8af

[4] https://www.bbc.co.uk/news/world-australia-56163550