Hate speech is still too easy to find in ...

Shortly after the Pittsburgh synagogue shooting, I noticed that the word “Jews” was trending on Twitter. As a social media researcher and educator, I was concerned that hateful and violent content would spread online, as it has in the past.

The alleged synagogue shooter’s activity on the social media site Gab has drawn attention to that site’s role as a hate-filled alternative to more mainstream options like Facebook and Twitter. Those are among the social media platforms that have vowed to combat hate speech and online abuse on their sites.

However, as I explored online activity after the shooting, I quickly realized that the problems are not confined to sites like Gab. Hate speech is still easy to find on major social media sites, including Twitter. I also identified some additional steps the company could take.

Incomplete responses to new hate terms

I expected new threats to emerge online around the Pittsburgh shooting, and there were signs that it was happening. In a recent anti-Semitic attack, Nation of Islam leader Louis Farrakhan used the word “termite” to describe Jews. I searched for this term, knowing that racists would likely use the new slur as a keyword to avoid detection when expressing anti-Semitism.

Twitter had not suspended Farrakhan’s account in the wake of yet another of his anti-Semitic remarks, and Twitter’s search function automatically suggested that I might be looking for the phrase “termites devour bullets.” That turns the Twitter search box into a billboard for hate.

However, the company had apparently adjusted some of its internal algorithms, because my search results showed no tweets with anti-Semitic uses of the word “termite.”

Posts unnoticed for years

As I continued searching for hate speech and calls for violence against Jews, I found even more disturbing evidence of shortcomings in Twitter’s content moderation system.

In the wake of the 2016 US election and the discovery that Twitter had been used to influence it, the company said it was investing in machine learning to “detect and mitigate the effect on users of fake, coordinated and automated account activity.”

From what I found, these systems have not identified even very simple, clear and direct violent threats and hate speech that had been on the site for years.

When I reported a tweet posted in 2014 that advocated killing Jewish people “for fun,” Twitter took it down the same day, but its standard automated notification gave no explanation of why the tweet had gone untouched for more than four years.

Gaming the system with images

When I reviewed the hateful tweets that had gone uncaught after all those years, I noticed that many contained no text: the tweet was just an image.

Without text, tweets are harder to find, both for users and for Twitter’s own hate-detection algorithms. But users who deliberately seek out hate speech on Twitter can scroll through the activity of the accounts they find, seeing even more hateful messages.

Twitter seems to be aware of this problem: users who report a tweet are prompted to review a few other tweets from the same account and submit them at the same time. This does submit a bit more content for review, but it still leaves room for some to go undetected.

Help for the struggling tech giants

When I found tweets that I believed violated Twitter’s policies, I reported them. Most were removed quickly, some within an hour. But other obviously hateful posts took several days to come down.

A few text-based tweets still have not been removed, despite clearly violating Twitter’s policies. That shows the company’s content review process is not consistent.