
Twitter updates its rules on manipulated content (and here is what you should know)

  • With these measures on synthetic or modified content, Twitter also hopes to add more contextual information to posts

  • Posts that the social network identifies as risky will trigger alerts when users like or retweet them

  • In this way, the platform expects to reduce their visibility and limit how often they are recommended to the public

In general terms, content can have a very positive effect for companies and groups. It allows organizations to create more dynamic interactions with their communities on social networks. It can also help build a closer bond between a company and its own staff. In addition, it represents a valuable business opportunity, in good part because it allows organizations to place advertising messages and sell products to people.

However, the same characteristics that make content a striking tool also give it certain disadvantages in the hands of malicious agents. According to The Conversation, fake news can be used to deceive the population. Research points out that negative content could have a disastrous effect on the emotional development of young people. According to The New York Times, it could contribute to the radicalization of the public.

Thus, countless regulators, civil groups, and companies have tried to wage a battle against negative, false, and radical content. At the head of this fight, by choice or not, are the social networks. As the main channels of information dissemination, every platform in the space is trying to figure out how to filter and remove material deemed dangerous, without undermining the right to free expression. The most recent attempt comes from Twitter.

Additional measures for manipulated content

In a statement, the microblogging social network announced a series of measures to reduce the impact of certain content on its platform. Twitter says that, from now on, users are prohibited from sharing synthetic or manipulated content that may cause harm. To enforce this, the company may label posts containing material that has been altered to convey a different view, in an effort to reduce the incidence of deception.


Twitter shared that three criteria will be considered when cataloging potentially misleading content. First, it will assess whether the material is manipulated or synthetic. Then, it will analyze whether the author shared it with malicious intent or with the aim of deceiving. Finally, it will decide whether its effects on the population could result in serious harm or pose a danger to public safety. Depending on the case, the content may be labeled or removed, as sketched below.
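Twitter's statement does not describe how these checks are implemented, but the three criteria read like a simple decision procedure. The Python sketch below is purely illustrative: the `Post` fields, the `moderate` function, and the exact mapping of criteria to outcomes are assumptions made for this example, not Twitter's actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"    # content passes unchanged
    LABEL = "label"    # content is labeled to give viewers context
    REMOVE = "remove"  # content is taken down


@dataclass
class Post:
    # Criterion 1: is the media significantly altered or fabricated?
    is_synthetic_or_manipulated: bool
    # Criterion 2: was it shared with malicious or deceptive intent?
    shared_deceptively: bool
    # Criterion 3: could it cause serious harm or threaten public safety?
    risks_serious_harm: bool


def moderate(post: Post) -> Action:
    """Hypothetical reading of Twitter's three announced criteria."""
    if not post.is_synthetic_or_manipulated:
        return Action.ALLOW
    # Deceptive sharing combined with a risk of serious harm is the
    # strongest case and, per the announcement, may lead to removal.
    if post.shared_deceptively and post.risks_serious_harm:
        return Action.REMOVE
    # Altered media alone, or altered media shared deceptively without a
    # clear safety risk, is labeled rather than removed.
    return Action.LABEL


# Example: a fabricated post, shared deceptively, that endangers public
# safety would be removed under this reading.
print(moderate(Post(True, True, True)))  # Action.REMOVE
```

In this reading, removal is reserved for posts that fail all three tests at once, while anything that fails only the first (or the first two) is labeled, which matches the graduated response the announcement describes.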

A sufficient measure to prevent incidents?

Manipulated, synthetic, and false content has produced unpleasant results before. Deepfakes have been widely used to discredit people, something that not even Facebook could ignore. With the coronavirus, disinformation and fake news are fueling conspiracy theories. And a few months ago an incident occurred on LinkedIn, where these resources were used for espionage.

At first glance, Twitter's policies seem appropriate. These measures establish a more or less clear standard for what does (or does not) constitute risky content. They not only help people stop seeing such material, but also give them the tools to understand exactly why it is a risk. At the same time, they preserve the freedom of expression of even the most radical users, removing only ideas that are clearly harmful.

On the other hand, this false-content policy has the same problem as the rest of these approaches: it does not act until after the fact. Of course, it is very difficult for Twitter (or any other social network, for that matter) to build a system that detects and stops posts that violate its policies before they reach the public. But under these measures, people can still be exposed, even if only briefly, to the effects of these publications.

As long as there is no other solution, these initiatives will have to suffice. But it is urgent to find a more preventive system of content moderation.