
Platforms Are Fighting Online Abuse—but Not the Right Kind

We are all at risk of experiencing occasional harassment—but for some, harassment is an everyday part of life online. In particular, many women in public life experience chronic abuse: ongoing, unrelenting, and often coordinated attacks that are threatening and frequently sexual and explicit. Scottish First Minister Nicola Sturgeon and former New Zealand Prime Minister Jacinda Ardern, for example, have both been subjected to widely reported abuse online. Similarly, a recent UNESCO report detailing online violence against women journalists found that Nobel Prize–winning journalist Maria Ressa and UK journalist Carole Cadwalladr faced attacks that were “constant and sustained, with several peaks per month delivering intense abuse.”

We, two researchers and practitioners who study the responsible use of technology and work with social media companies, call this chronic abuse, because no single triggering moment, debate, or position sparks the steady blaze of attacks. But much of the conversation around online abuse—and, more critically, the tools we have to address it—focuses on what we call acute cases. Acute abuse is often a response to a debate, a position, or an idea: a polarizing tweet, a new book or article, a public statement. Acute abuse eventually dies down.

Platforms have dedicated resources to help address acute abuse. Users under attack can block individuals outright and mute content or other accounts, moves that let them keep a presence on the platform while shielding them from content they do not want to see. They can limit interactions with people outside their networks using tools like closed messages and private accounts. Third-party applications also attempt to fill gaps in these controls by proactively muting or filtering content.

These tools work well for dealing with episodic attacks. But for journalists, politicians, scientists, actors—anyone, really, who relies on connecting online to do their job—they are woefully insufficient. Blocking and muting do little against ongoing coordinated attacks, as entire groups maintain a continuous stream of harassment from different accounts. Even when users successfully block their harassers, the mental health impact of seeing a deluge of attacks is immense; in other words, the damage is already done. These are retroactive tools, useful only after someone has been harmed. Closing direct messages and making an account private can protect the target of an acute attack, who can go public again after the harassment subsides. But these are not realistic options for the chronically abused, as over time they only remove people from broader online discourse.

Platforms need to do more to enhance safety-by-design, including upstream solutions such as improving human content moderation, handling user complaints more effectively, and building better systems to support users who face chronic abuse. Organizations like Glitch are working to educate the public about the online abuse of women and marginalized people while providing resources to help targets tackle these attacks, including adapting bystander training techniques for the online world, pushing platform companies to improve their reporting mechanisms, and urging policy change.

But toolkits and guidance, while extremely helpful, still place the burden of responsibility on the shoulders of the abused. Policymakers must also do their part to hold platforms accountable for combating chronic abuse. The UK’s Online Safety Bill is one such mechanism. The bill would force large companies to make their policies on removing abusive content and blocking abusers clearer in their terms of service. It would also legally require companies to offer users optional tools to control the content they see on social media. However, debate over the bill has weakened some proposed protections for adults in the name of freedom of expression, and the bill still focuses on tools that help users make choices rather than on solutions that stop abuse upstream.
