More than a year after a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women toward clips of scantily clad minors performing body-contorting gymnastics, taking an ice “challenge,” or licking a popsicle.
A YouTube creator named Matt Watson flagged the problem in a critical Reddit post, saying he found dozens of videos of children beneath which YouTube users were trading inappropriate comments and timestamps below the fold, and denouncing the company for failing to prevent what he describes as a “soft-core pedophile ring” operating in plain sight on its platform.
He also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he calls a pedophilia “wormhole,” accusing the company of facilitating and monetizing the sexual exploitation of children.
We were easily able to replicate the behavior of YouTube’s algorithm that Watson describes: in a history-cleared private browser session, after clicking on two videos of bikini-clad adult women, the platform suggested we watch a video called “Sweet Sixteen Pool Party.”
Clicking on that video populated YouTube’s “Up next” sidebar with multiple videos of prepubescent girls, the section where the algorithm surfaces related content to encourage users to keep clicking.
The videos recommended to us in this sidebar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines,” or licking lollipops or popsicles.
Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appeared intended to highlight, shortcut to, and share the most compromising positions and/or moments in the videos of minors.
We also found multiple examples of inappropriate timestamps and comments on children’s videos that the YouTube algorithm recommended that we view.
Some comments from other YouTube users denounced those who made sexually suggestive comments about children in the videos.
In November 2017, several major advertisers froze spending on the YouTube platform after an investigation by the BBC and the Times discovered similarly obscene comments on children’s videos.
Earlier in the same month, YouTube was also criticized for low-quality content targeting children as viewers on its platform.
The company announced a series of policy changes related to child-focused videos, including saying it would aggressively police comments on videos of children, and that videos found to have inappropriate comments about the children in them would have comments disabled entirely.
Some of the videos of young girls that YouTube recommended we view already had comments disabled, which suggests its AI had previously identified large numbers of inappropriate comments being shared (per its policy of disabling comments on clips containing children when comments are deemed “inappropriate”); yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul.”
Watson also says he found ads being displayed on some videos of children containing inappropriate comments, and claims he found links to child pornography being shared in YouTube comments as well.
We were unable to verify those findings in our brief tests.
We asked YouTube why its algorithms skew toward recommending videos of minors, even when the viewer starts out watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.
The company sent us the following statement in response to our questions:
Any content, including comments, that endangers minors is abhorrent and we have clear policies against it on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform, and terminating accounts. We continue to invest heavily in technology, teams, and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.
A YouTube spokesperson also told us it is reviewing its policies in light of what Watson has highlighted, adding that it is in the process of reviewing the specific videos and comments featured in his video, and that some of the content has already been removed as a result of that review.
The spokesperson emphasized, though, that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Although, of course, the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)
The spokesperson added that YouTube works with the National Center for Missing & Exploited Children to report accounts found making inappropriate comments about children to law enforcement authorities.
In a broader discussion of the issue, the spokesperson told us that determining context remains a challenge for its AI moderation systems.
On the human moderation front, he said the platform now has around 10,000 human reviewers tasked with evaluating content flagged for review.
The volume of video content uploaded to YouTube is around 400 hours per minute, he added.
There is still, clearly, a massive asymmetry around content moderation on user-generated content platforms, with AI unable to plug the gap because of its continued weakness at understanding context, even as platforms’ human moderation teams remain hopelessly under-staffed and under-resourced relative to the scale of the task.
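To make that asymmetry concrete, here is a rough back-of-the-envelope sketch using the two figures cited above (400 hours uploaded per minute, 10,000 reviewers); the 8-hour shift and real-time review speed are our own illustrative assumptions, not figures from YouTube:

```python
# Back-of-the-envelope estimate of the moderation asymmetry, using the two
# figures cited above: ~400 hours of video uploaded per minute and ~10,000
# human reviewers. The shift length and review speed are assumptions made
# purely for illustration.

UPLOAD_HOURS_PER_MINUTE = 400
REVIEWERS = 10_000
REVIEW_HOURS_PER_SHIFT = 8   # assumed: one 8-hour shift per reviewer per day
REVIEW_SPEED = 1.0           # assumed: reviewing at real-time playback speed

# Hours of video uploaded to the platform in a single day.
uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24          # 576,000 hours

# Hours of video the entire review workforce could watch in a single day.
reviewed_per_day = REVIEWERS * REVIEW_HOURS_PER_SHIFT * REVIEW_SPEED  # 80,000 hours

print(f"Uploaded per day:   {uploaded_per_day:,} hours")
print(f"Reviewable per day: {reviewed_per_day:,.0f} hours")
print(f"Coverage:           {reviewed_per_day / uploaded_per_day:.1%}")
# => roughly 14% of one day's uploads, even if every reviewer did nothing
#    but watch new uploads end to end, with no backlog of flagged content.
```

Under those generous assumptions, human review could cover only around one in seven hours of newly uploaded video, which is why platforms lean so heavily on AI that, by YouTube’s own admission, still struggles with context.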
Another key point YouTube’s statement failed to address is the clear tension between ad-based business models that monetize content based on viewer engagement (such as its own) and content safety concerns, which require careful consideration of both the substance of the content and the context in which it is being consumed.
It is certainly not the first time YouTube’s recommendation algorithms have been accused of harmful impacts. In recent years the platform has been accused of automating radicalization by pushing viewers toward extremist and even terrorist content, which prompted YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.
The broader societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote anti-science health content has also been raised repeatedly as a concern, including where YouTube is concerned.
And just last month, YouTube said it would reduce recommendations of what it called “borderline content” and content that “could misinform users in harmful ways,” citing examples such as videos promoting a bogus miracle cure for a serious illness, claiming the Earth is flat, or making “blatantly false claims” about historic events like the 9/11 terrorist attacks in New York.
“While this change will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote at the time. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”
YouTube said the change to algorithmic recommendations around conspiracy videos would be gradual, initially affecting recommendations on only a small set of videos in the United States.
It also noted that implementing the tweak to its recommendation engine would involve both machine learning technology and human evaluators and experts helping to train the AI systems.
“Over time, as our systems become more accurate, we will expand this change to more countries,” it added. “It is just one more step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the YouTube recommendation experience.”
It remains to be seen whether YouTube will broaden that policy change and decide that it should exercise greater responsibility for the way its platform recommends and presents videos of children for remote consumption in the future.
Political pressure may be one motivating force, with momentum building for regulation of online platforms, including calls for internet companies to face clear legal liability, and even a legal duty of care toward users, regarding the content they distribute and monetize.
For example, UK regulators have made social media and internet security legislation a political priority, and the government is due to publish a White Paper setting out its plans for the dominant platforms this winter.