YouTube dealt with an “unprecedented volume” of videos after last week’s mass shootings in New Zealand, as the platform struggled to remove footage of the attack, the company’s chief product officer told The Washington Post.
Friday’s killings at two mosques in Christchurch were recorded and posted to social media around the world, apparently as part of a deliberate effort to spread the images online. As the footage spread across the internet, it was uploaded again and again. Yesterday, Facebook said it removed 1.5 million videos of the attack in the first 24 hours after the shooting.
The videos were altered to evade detection.
While YouTube did not say precisely how many videos it ultimately removed, the company faced a flood of uploads after the shooting, with moderators working through the night to take down tens of thousands of copies of the recording, chief product officer Neal Mohan told the Post. Some uploads were reportedly altered to evade detection, with users slightly modifying the footage to prevent automated tools from flagging it.
Copies were reportedly uploaded as fast as one per second, and the platform eventually disabled some search functions to limit the footage’s visibility. YouTube also bypassed some human review steps to speed up removals, Mohan told the Post. (The service had said Friday that it was sending potentially newsworthy videos containing clips of the material to human moderators for review.)
Social media companies are facing new questions about content moderation after the shooting, footage of which spread not only across the largest services but also into the darker corners of the internet. While the services said the incident was unprecedented, the spread of the shooting video has prompted lawmakers to call on companies to do more to police their platforms.