Erica Flores

Twitter proposes draft deepfake policies

On Monday, Twitter announced that it would develop a deepfake policy and asked users to help shape its final decision on the new rule.

Late last month, Twitter’s safety team announced that it would seek comment on what a deepfake and synthetic media policy should look like on the platform. In a blog post Monday referencing that announcement, Twitter’s VP of trust and safety Del Harvey wrote that if manipulated media were flagged on the platform, Twitter could end up placing a notice next to it alerting users that it was manipulated, warning them before they share it that it is false, or adding context in the form of a link or news article explaining why others believe it is misleading. Twitter could also remove the content entirely, Harvey wrote.

It is similar to how government agencies issue new rules

At the end of the blog post, Twitter directs users to a survey to help evaluate the platform’s options. The survey asks multiple-choice questions to help the company decide whether manipulated video should simply be removed or labeled. It’s similar to how government agencies like the Federal Communications Commission issue new rules: they first publish drafts and ask for public comment before officials vote.

The survey questions show a company wrestling with whether it should have the power to decide what is true or false. It’s a problem that other platforms, like Facebook, have struggled with for months. Much of the content moderation debate we’re seeing now came to a head last May, when a video of House Speaker Nancy Pelosi, doctored to make her appear drunk, circulated on social platforms like Facebook, Twitter, and YouTube. At that point, YouTube removed it. Facebook sent it to a third-party fact-checker, placed news coverage discrediting the video next to it, and warned users that it was fake before they shared it. Twitter left it up.

The debate over whose responsibility it is to decide what is true and false has only intensified in recent weeks, after Facebook decided not to remove deceptive or false ads from politicians. After lawmakers criticized CEO Mark Zuckerberg and his company over that decision, Twitter announced that it would ban all political ads starting November 22.

Twitter didn’t have a deepfake policy when the Pelosi video went viral, and now the company appears to be moving to change that. Twitter will close its comment period on November 27 and will announce an official policy at least 30 days before it takes effect.