By Nathan Grayson
Last weekend, a gruesome scene unfolded live on Twitch as a shooter opened fire in a Buffalo, New York, supermarket.
Ultimately, 10 people were killed. Since then, millions have viewed videos of the cold-blooded carnage on platforms such as Facebook. But at the time, just 22 concurrent viewers tuned in. Twitch pulled the plug less than two minutes after the shooter opened fire.
Twitch managed to move quickly where others faltered – especially the far larger Facebook – on content that was live rather than prerecorded.
Facebook also moved to immediately delete copies of the live-streamed video, but a link to the footage from lesser-known site Streamable garnered 46,000 shares on Facebook and remained on the site for more than 10 hours.
In a statement to The Washington Post earlier this week, Facebook parent company Meta said it was working to permanently block links to the video but had faced “adversarial” efforts by users trying to circumvent its rules to share the video.
Though spokespeople for Twitch were hesitant to offer exact details about the company's behind-the-scenes response, for fear of giving away secrets to those who might follow in the Buffalo shooter's footsteps, the company has provided an outline.
“As a global live-streaming service, we have robust mechanisms established for detecting, escalating and removing high-harm content on a 24/7 basis,” Twitch VP of trust and safety Angela Hession told The Washington Post in a statement after the shooting.
“We combine proactive detection and a robust user reporting system with urgent escalation flows led by skilled human specialists to address incidents swiftly and accurately.”
She went on to explain how Twitch is collaborating with law enforcement and other platforms to prevent new uploads of the video and minimize longer-term harm.
“We are working closely with several law enforcement agencies such as the FBI, Department of Homeland Security, and NYPD Cyber Intelligence Unit,” she said.
“In addition to working with law enforcement and the [Global Internet Forum to Counter Terrorism], we’ve been working closely with our industry peers throughout this event to help prevent any related content from spreading and minimize harm.”
In an interview conducted a week before the shooting, Hession and Twitch global VP of safety ops Rob Lewington provided additional insight into how the platform turned a corner after a bumpy handful of years – and where it still needs to improve. (Twitch is owned by Amazon, whose founder, Jeff Bezos, owns The Post.)
First and foremost, Hession and Lewington stressed that Twitch's approach to content moderation centres on human beings; while modern platforms like Twitch, YouTube and Facebook use a mixture of automation and human teams to sift through millions of uploads per day, Lewington said Twitch never relies solely on automated decision-making.
“While we use technology, like any other service, to help tell us pro-actively what’s going on in our service, we always keep a human in the loop of all our decisions,” said Lewington, noting that in the past two years, Twitch has quadrupled the number of people it has on hand to respond to user reports.
This, Hession and Lewington said, is crucial on a platform that, more so than any other, orbits around live content. Unlike on YouTube – where the bulk of the business is in prerecorded videos that can be screened before uploading and deleted if need be – Twitch is a place where most of the damage from violent or otherwise rule-breaking footage is done the moment it happens.
With that in mind, Lewington touted an internal stat: 80% of user reports, he said, are resolved in under 10 minutes. On a platform with 9 million streamers in total and more than 200 million lines of chat entered per day, that takes a well-oiled machine.
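Twitch has not published how its escalation flows actually work, but the basic shape Hession describes – reports ranked by potential harm, with the most urgent routed to human specialists first – can be sketched as a priority queue. The harm categories, weights and names below are illustrative assumptions, not Twitch's taxonomy or code.

```python
import heapq
import itertools
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical harm weights -- illustrative only, not Twitch's actual taxonomy.
HARM_WEIGHTS = {
    "violence_live": 100,   # suspected real-world violence on a live stream
    "self_harm": 90,
    "hate_speech": 60,
    "nudity": 40,
    "spam": 10,
}

@dataclass(order=True)
class Report:
    priority: int                                   # negative harm weight
    seq: int                                        # tie-breaker: first in, first out
    channel: str = field(compare=False, default="")
    category: str = field(compare=False, default="")

class ReportQueue:
    """Surfaces the highest-harm user reports to human reviewers first."""

    def __init__(self) -> None:
        self._heap: list[Report] = []
        self._counter = itertools.count()

    def submit(self, channel: str, category: str) -> None:
        # Negate the weight so the most harmful report pops first.
        weight = HARM_WEIGHTS.get(category, 10)
        heapq.heappush(self._heap, Report(-weight, next(self._counter), channel, category))

    def next_for_review(self) -> Optional[Report]:
        # A human specialist pulls the most urgent report for review.
        return heapq.heappop(self._heap) if self._heap else None

queue = ReportQueue()
queue.submit("channel_a", "spam")
queue.submit("channel_b", "violence_live")
print(queue.next_for_review().category)  # -> "violence_live"
```

A queue like this is only the routing layer; the 10-minute resolution figure still depends on having enough trained reviewers available around the clock, which is where the quadrupled response staff comes in.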
Twitch did not reach this point without bad actors throwing a few wrenches into the works, however. The platform’s current approach to content moderation is, in some ways, a product of several highly public, painful lessons.
In 2019, it combated and ultimately sued users who repeatedly posted reuploads of the Christchurch mosque shooting, which had originally been streamed on Facebook. Later that same year, a different gunman used Twitch to broadcast himself killing two people outside a synagogue in the German city of Halle.
Twitch did not react to either of those massacres as quickly as it did to the Buffalo shooting; it took the platform 35 minutes to bring down the original stream of the Halle shooting, and an auto-generated recording was viewed by 2,200 people.
As in those prior instances – in which the shooters spoke of “white genocide” and a desire to kill “anti-whites,” respectively – racism was a key motivator in the Buffalo shooter’s rampage.
Twitch has struggled with racism over the years. Racist abuse in chat remains a problem, albeit one streamers have significantly more tools to combat than they did back in, say, 2016, when a Black pro “Hearthstone” player had his breakout moment ruined by a flood of racist comments and imagery – all while his parents watched.
Still, bad actors have evolved with the times. Late last year, Twitch was overwhelmed by a plague of “hate raids,” in which trolls flooded streamers’ chats with bot-powered fake accounts that spammed hateful messages.
These attacks primarily targeted streamers who were Black or otherwise marginalised. It took months for Twitch to get them under control, with streamers feeling so dissatisfied that they launched a hashtag campaign and site-wide strike pleading for the company to “do better.”
Hession acknowledged that communication has faltered in key moments: “I empathise,” she said. “We’re trying to strike that better balance of telling our community [what we’re doing] while making sure we’re protecting them so the bad actors don’t game the system even more. … We have to do a better job of messaging that we do listen and we’re trying to always do the right thing for our global community.”
Twitch took its share of knocks when hate raids were at their apex, but Hession feels like the platform is stronger for it. She pointed to features that were rolled out during or after that time frame: proactive detection of bots (which she said was in the works even before hate raids began), phone verification for chat, and suspicious user detection. These tools, combined with educational resources that keep streamers up to speed on their options, have made bot-based hate raids significantly more difficult for malicious users to conduct.
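Twitch exposes these protections as channel-level settings, though the enforcement logic behind them is not public. A simplified gate that checks a would-be chatter against a channel's requirements might look like the sketch below; every field name, default and threshold here is an assumption for illustration, not Twitch's API.

```python
from dataclasses import dataclass

@dataclass
class ChannelSafetySettings:
    # Illustrative channel-level options, loosely modelled on the features
    # described above; names and defaults are assumptions, not Twitch's API.
    require_phone_verification: bool = True
    min_account_age_days: int = 7
    block_suspected_ban_evaders: bool = True

@dataclass
class Chatter:
    username: str
    phone_verified: bool
    account_age_days: int
    flagged_as_suspicious: bool  # e.g. from a "suspicious user" detector

def may_chat(user: Chatter, settings: ChannelSafetySettings) -> bool:
    """Return True only if the user clears the channel's safety gates."""
    if settings.require_phone_verification and not user.phone_verified:
        return False
    if user.account_age_days < settings.min_account_age_days:
        return False
    if settings.block_suspected_ban_evaders and user.flagged_as_suspicious:
        return False
    return True

# A fresh, unverified throwaway account would be kept out of chat:
bot = Chatter("bot123", phone_verified=False, account_age_days=0, flagged_as_suspicious=False)
print(may_chat(bot, ChannelSafetySettings()))  # False
```

The point of gates like these is economic: mass-produced bot accounts are cheap, but phone-verified, aged accounts are not, which is why streamers who switched the settings on saw raids dry up.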
Those improvements culminated in a significantly faster response to a far-right incursion earlier this year. In March, users from a streaming service called Cozy.tv – owned by white nationalist Nick Fuentes, who has recently taken to calling the Buffalo shooting a “false flag” – descended upon LGBTQIA+ Twitch streamers and bombarded them with homophobic messages. These users would then broadcast Twitch streamers’ incensed reactions to their home-brewed hate raids on Cozy.tv for each other’s amusement. This time, Twitch resolved the problem in just 24 hours.
“We reached out much more quickly to the community to articulate, ‘Here are the safety features that can be put on your channels,'” Hession said.
“And when we saw that people were using the channel-level safety features, the bad actors quickly moved on. They could no longer create the harm they wanted. We also quickly leaned in with the legal team to find out who these actors were. As you saw, it stopped very quickly.”
Hession and Lewington repeatedly referenced the importance of human intervention in Twitch’s moderation decisions, but automation still plays a role.
While Twitch has been reticent to discuss it publicly, several former Twitch employees told The Post that the platform employs machine learning to detect subject matter like explicit pornography, which used to slink onto the site with relative frequency. It uses that same technology to detect real-life violence as well, though that has proved a much tougher nut to crack.
“There just isn’t much data out there like the shooting to train systems on, whereas there is a lot of porn out there to train systems on,” said a former Twitch employee who spoke on the condition of anonymity because they were not authorized to speak on these matters publicly.
“Combining that with the fact that many video games have engineers spending a lot of time to make their products look as realistic as possible just makes it a hard problem to solve. By ‘hard problem,’ I mean several problems, namely: ‘Does what I am looking at look like violence?’ ‘Does it look like a known video game?’ ‘Does it look like video game violence?’ And being able to answer questions like that in very short amounts of time.”
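The former employee's framing – several sub-questions answered per frame, under a tight time budget – suggests the rough shape of such a detector, even if the real models are proprietary. The sketch below assumes hypothetical per-frame classifiers and made-up thresholds; it only illustrates how those answers might be combined before escalating a live segment to a human.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class FrameScores:
    # Probabilities from hypothetical models -- stand-ins for the trained
    # classifiers a real system would use.
    violence: float    # "Does what I am looking at look like violence?"
    known_game: float  # "Does it look like a known video game?"

def escalate_segment(
    frames: Iterable[object],
    score_frame: Callable[[object], FrameScores],
    violence_threshold: float = 0.9,
    game_threshold: float = 0.5,
    min_hits: int = 5,
) -> bool:
    """Flag a live segment for human review if enough frames look like
    real-world violence and do NOT look like recognisable game footage.
    All thresholds are illustrative assumptions."""
    hits = 0
    for frame in frames:
        s = score_frame(frame)
        if s.violence >= violence_threshold and s.known_game < game_threshold:
            hits += 1
            if hits >= min_hits:
                return True  # hand off to a human specialist immediately
    return False

# Stub scorer for demonstration only; a real scorer would run trained models.
stub = lambda frame: FrameScores(violence=0.95, known_game=0.1)
print(escalate_segment(range(10), stub))  # True
```

Note that this automation only decides what to surface; consistent with Lewington's "human in the loop" point, the takedown call itself would still rest with a person.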
Twitch’s reaction to the Buffalo shooting was faster than anybody else’s, but users still managed to record the stream and distribute copies to a multitude of other platforms.
The company continues to collaborate with the likes of YouTube, Facebook and Twitter as part of the Global Internet Forum to Counter Terrorism, which has allowed participating organizations to pool data on different versions of the Buffalo shooting video and remove them quickly. But there are still loopholes bad actors can exploit.
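In practice, that pooling works by sharing fingerprints (hashes) of known copies of the video so each member platform can match new uploads against the shared list. The sketch below uses a plain SHA-256 file hash for simplicity and invents the blocklist plumbing; consortium databases typically rely on perceptual hashes instead, precisely so that re-encoded, cropped or watermarked copies still match – which is one of the loopholes exact hashing leaves open.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Exact-match fingerprint of an uploaded file (illustrative only;
    shared industry databases generally use perceptual hashes that
    survive re-encoding and cropping)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical blocklist of known fingerprints, e.g. synced from a
# consortium database shared between platforms.
KNOWN_HASHES: set[str] = set()

def should_block(upload: Path) -> bool:
    # Reject the upload if its fingerprint matches a known copy.
    return sha256_of(upload) in KNOWN_HASHES
```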
“This work will never be done,” said Hession, “and we will continue to study and improve our safety technology, processes and policies to protect our community.”