
Suicide Watch on Facebook Raises Concerns

Courtney Davis, a telecommunications operator with the Rock Hill Police Department, and Police Sgt. Bruce Haire in Rock Hill, S.C., December 17, 2018 (Logan R. Cyrus / The New York Times)

by Natasha Singer

A police officer working the late shift in an Ohio city recently received an unusual call from Facebook.

Earlier that day, a local woman wrote a Facebook post that said she was walking home and intended to commit suicide when she got there, according to a police report on the case. Facebook called to warn the Police Department of the suicide threat.

The officer who took the call quickly located the woman, but she denied having suicidal thoughts, according to the police report. Still, the officer believed she could hurt herself and told the woman that she should go to the hospital, either voluntarily or in police custody. Eventually, the officer took her to a hospital for a mental health evaluation, one prompted by Facebook’s intervention. (The New York Times withheld some details of the case for privacy reasons.)

Police departments from Massachusetts to Mumbai have received similar alerts from Facebook over the past 18 months as part of what is likely the world’s largest suicide threat detection and alert program. The social network ramped up the effort after several people broadcast their suicides live on Facebook Live in early 2017. It now uses algorithms and user reports to detect possible suicide threats.

Facebook’s rise as a global arbiter of mental anguish puts the social network in a difficult position at a time when it is being investigated for privacy breaches by regulators in the United States, Canada and the European Union, and is facing increased scrutiny for failing to respond quickly to election interference and ethnic hate campaigns on its site. Although Facebook CEO Mark Zuckerberg apologized for the improper collection of user data, the company last month faced new revelations about special data-sharing agreements with tech companies.

The anti-suicide campaign gives Facebook a chance to frame its work as good news. Suicide is the second leading cause of death among 15- to 29-year-olds worldwide, according to the World Health Organization. Some mental health experts and police officers said that Facebook had helped officers locate and detain people who were clearly about to harm themselves.

Facebook has computer algorithms that scan user posts, comments and videos in the United States and other countries for signs of immediate suicide risk. When a post is flagged, whether by the technology or by a concerned user, it goes to the company’s human reviewers, who are empowered to call local authorities.
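Facebook has not disclosed how this pipeline is implemented. Purely as an illustration, a minimal Python sketch of the flow described here, in which every name (FlaggedPost, human_review, contact_local_authorities) is a hypothetical stand-in rather than any real Facebook interface, might look like this:

```python
# Hypothetical sketch of the triage flow described above; not Facebook's code.
# A post flagged by the detection technology or by a concerned user is routed
# to a human reviewer, who alone decides whether to contact local authorities.
from dataclasses import dataclass


@dataclass
class FlaggedPost:
    post_id: str
    text: str
    flagged_by: str  # "algorithm" or "user_report" (assumed labels)


def contact_local_authorities(post: FlaggedPost) -> None:
    # Stand-in for calling the police or other emergency services.
    print(f"Alerting local authorities about post {post.post_id}")


def send_support_resources(post: FlaggedPost) -> None:
    # Stand-in for sending help-line contacts and prompts to reach friends.
    print(f"Sending support resources for post {post.post_id}")


def human_review(post: FlaggedPost, imminent_risk: bool) -> None:
    """Placeholder for the trained reviewer's judgment, which Facebook says
    leads to an emergency call only in a minority of cases."""
    if imminent_risk:
        contact_local_authorities(post)
    else:
        send_support_resources(post)


if __name__ == "__main__":
    post = FlaggedPost("p-123", "example flagged text", flagged_by="algorithm")
    human_review(post, imminent_risk=False)
```

The point the sketch preserves is that, as described, the automated flag only routes a post into a human review queue; the decision to involve emergency services rests with the reviewer.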

“In the last year, we have helped first responders quickly reach some 3,500 people around the world who needed help,” Zuckerberg wrote in a November post about the efforts.

But other mental health experts said Facebook’s calls to the police could also cause harm, such as inadvertently precipitating suicide, forcing non-suicidal people to undergo psychiatric evaluations, or leading to arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it was not tracking the outcomes of its calls to the police. And it has not revealed exactly how its reviewers decide whether to call emergency personnel. According to critics, Facebook has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

“It’s difficult to know what Facebook is really picking up on, what they’re acting on, and whether they’re responding appropriately to the appropriate risk,” said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “It’s black box medicine.”

Facebook said it worked with suicide prevention experts to develop a comprehensive program to quickly connect users in distress with their friends and send them contact information for help lines. The experts also helped train dedicated Facebook teams, who have experience in law enforcement and crisis response, to review the most urgent cases. Those reviewers contact emergency services only in a minority of cases, when users appear to be at imminent risk of serious self-harm, the company said.

“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.

In a September post, Facebook described how it had developed a pattern recognition system to automatically score certain posts and user comments for the likelihood that they express suicidal thoughts. The system automatically escalates high-scoring posts, as well as posts reported by concerned users, to specially trained reviewers.
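The company has not published the model, its features or its thresholds. As a loose illustration of the escalation logic the post describes, here is a toy Python scorer in which the phrase list, the weight and the cutoff are all invented for the example:

```python
# Toy illustration of the escalation rule described in the September post:
# high-scoring posts, and posts reported by concerned users, go to reviewers.
# The scoring function and threshold below are invented for illustration;
# Facebook has not published its actual pattern-recognition model.

RISK_THRESHOLD = 0.8  # assumed cutoff, not a disclosed value


def risk_score(text: str) -> float:
    """Crude keyword stand-in for the pattern-recognition system."""
    phrases = ["want to die", "end it all", "kill myself"]
    hits = sum(phrase in text.lower() for phrase in phrases)
    return min(1.0, 0.85 * hits)


def should_escalate(text: str, user_reported: bool) -> bool:
    """Route the post to specially trained reviewers if it scores high
    or if another user reported it."""
    return user_reported or risk_score(text) >= RISK_THRESHOLD


print(should_escalate("I just want to end it all", user_reported=False))  # True
print(should_escalate("Rough day at work", user_reported=True))           # True
print(should_escalate("Rough day at work", user_reported=False))          # False
```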