This initiative against publications that dehumanize others began in 2018.
At the time, Twitter's measures covered only religious groups.
According to the social network, it will continue working with external experts on future containment measures.
One of the biggest problems social media faces is toxic content. For example, YouTube became embroiled in a major controversy in 2019 for allegedly asking its moderators not to take down videos that spread fake news or sowed discord. Facebook has long struggled against misleading ads, especially political ones. Even Twitter and Snapchat are seeking to counteract the possible effect their platforms have on mental health.
Of course, this is not a new phenomenon. According to Forbes, it stems from the fact that Twitter, Facebook, YouTube and company are, first and foremost, advertising marketplaces. According to Fast Company, people with sufficient diversity of thought and expertise have not been brought in to fix the problem. And according to Better Marketing, there are also steps users themselves can take to mitigate the impact.
It should be clarified that the challenge is not exclusive to Twitter, Facebook, YouTube and the rest of the platforms in this market. The internet as a whole allows people to express their views with relative anonymity and great impunity, which removes some of the impediments to insults and toxic comments. But it is these platforms that have the greatest penetration and reach. In this sense, they must also find better ways to contain the phenomenon.
Twitter imposes new bans on toxic language
In this context, the leading microblogging social network has just revealed new measures in the way it monitors offensive content on its platform. According to The Verge, Twitter will begin deleting posts that dehumanize users of the platform on the basis of their religious beliefs, age group, disability, or illness. This initiative is part of its fight to stop any kind of message that incites hatred among the communities on the site.
With this change, Twitter aims to eliminate all posts that treat individuals as less than human with respect to any of the four categories mentioned. According to the company, all posts that meet these criteria will be removed. Accounts that made such comments before the rules take effect may remain active. The company also indicated that more vulnerable groups will be included in the future.
The difficult battle between hate and freedom
This is not the first time a company in this sector has made a similar change. Back in 2018, hate content on YouTube was described as one of the biggest challenges the company had to face. In 2019, WhatsApp became involved in a minor scandal when it was discovered that, in Germany, it was the favorite platform of certain neo-Nazi groups. And like Twitter, Facebook prohibits certain posts (even if not always with good results).
As regards this specific initiative, the microblogging network is on a very good path. Virtually all reasonable people agree that, no matter the context, hate content on social media is not acceptable. It makes sense for Twitter to systematically remove these publications, which only widen the gap between different segments of the population. In this sense, it is arguably a policy that was slow to arrive.
On the other hand, the company will have to keep moving forward very carefully. Not all toxic-content control policies are so black and white. There are many cases in which people's freedom of expression can be trampled by unilaterally deleting their posts. So far, Twitter has done well in deciding which issues to address and how to do it. But perhaps it should tread a little more carefully in the future.