As part of its ongoing effort to maintain cordiality and prevent controversy, Instagram announced that, as of December 16, 2019, it would use artificial intelligence (AI) to notify users when the caption they are writing for a photo or video contains language that could be offensive, giving them the opportunity to moderate their words before publishing.
"As part of our long-term commitment to lead the fight against online bullying, we have developed and tested an AI system that can recognize different forms of bullying on Instagram (). We have found that this kind of intervention can encourage people to reconsider their words when given the opportunity," the social network said.
This feature, which aims to strike a balance between freedom of expression and harmony, is initially available in a limited number of countries and will later be rolled out worldwide. "This warning helps educate people about what we do not allow on Instagram and when an account could be at risk of breaking our rules," added the Facebook subsidiary, which is also known for its strict censorship policies regarding images of the human body.
Although cyberbullying is not an issue exclusive to Instagram, the platform has proved particularly problematic. A 2017 study found that 17 percent of teenagers are harassed on social networks, and that most of the time this harassment occurs on Instagram. In October, the social network launched a feature that lets users shield their accounts from unwanted interactions with aggressive people: if you restrict someone, you stop seeing their comments.
Another measure Instagram has been testing is hiding the number of likes that content has received, with the aim of encouraging more expression and connection between people, rather than competition and the pursuit of popularity.