The Changing Face of the Online World: Key regulatory considerations

by Carolina Baigrie, Research Executive

The digital world has brought countless benefits, from instant messaging to flexible working opportunities. But the accompanying harms have become increasingly difficult for policymakers to ignore, particularly around social media. The EU and the UK are among the first to draft regulation in this area: the EU with the Digital Services Act (DSA) and the Digital Markets Act (DMA), and the UK with the Online Safety Bill. But can incoming regulation effectively protect users without unfairly limiting their online experience?

Defining harmful content is no easy feat; after all, what is harmful to one person may not be to another. The Online Safety Bill defines it as content having a “significant adverse physical or psychological impact”. For many, social media is a platform for debate, and opponents of regulation argue that policing online speech limits that debate and impinges on free speech rights. Perhaps it is not within the remit of corporations or governments to determine what may or may not be said online, but some believe that an external regulator, Ofcom in the Online Safety Bill’s case, is better equipped to draw the line between encouraging debate and preventing harmful discussion that risks real-life consequences.

Digital platforms enable content and information to spread rapidly. For many, technology has been a lifeline during the pandemic, allowing day-to-day life to continue even as we remained inside our homes. However, the rapid spread of mis- and disinformation via social media platforms has also attracted much attention. Viral content can be harmful, and the speed at which it spreads means hundreds of thousands of people can be exposed before moderators pick it up, as the viral videos that followed the 2019 Christchurch massacre demonstrated. Incoming regulation from both the UK and the EU will impose duties of care on online platforms, requiring digital service providers to reflect on how content is shared on their platforms as they take on the responsibilities of a publisher.

Incoming regulation, while no doubt necessary in some form, must be careful not to create barriers to entry. The scale and power of Big Tech has caused many of the issues we see today, and regulators must take care not to discourage new entrants with additional reporting requirements that only established tech giants are equipped to meet. Commentators on the white paper preceding the Online Safety Bill claimed the Bill “must not entrench the market power of the largest platforms by increasing barriers to entry for competitors. Ultimately, this would harm consumers.” The DMA is particularly alert to this issue, taking a more targeted approach by regulating only the Big Tech ‘gatekeepers’. Ofcom will have to assess the impact on competition regularly once the Online Safety Bill is implemented.

Recent developments have highlighted that many digital services and social media companies prioritise user engagement and advertising revenue over safety. Incoming regulation must ensure tech companies shift their focus to user safety and user experience. However, given the scale and complexity of the digital world, incoming regulation will inevitably fail to tick every box at first. Future iterations are to be expected and welcomed, from the UK and the EU as well as from other jurisdictions. Online safety is a cross-border issue that differs with political and social context, and a variety of approaches will be essential to developing appropriate and adequate regulation, if unintended consequences are not to do more harm than good.

This article appeared in the Cicero/amo December 2021 newsletter.