by Natasha Singer
A police officer working the late shift in an Ohio city recently received an unusual call from Facebook.
Earlier that day, a local woman wrote a Facebook post that said she was walking home and intended to commit suicide when she got there, according to a police report on the case. Facebook called to warn the Police Department of the suicide threat.
The officer who took the call quickly located the woman, but she denied having suicidal thoughts, according to the police report. Still, the officer believed she might harm herself and told the woman that she should go to the hospital, either voluntarily or in police custody. Eventually, he took her to a hospital for a mental health evaluation, an evaluation prompted by Facebook’s intervention. (The New York Times withheld some details of the case for privacy reasons.)
Police stations from Massachusetts to Mumbai have received similar alerts from Facebook over the past 18 months as part of what is likely the world’s largest suicide threat detection and alert program. The social network ramped up the effort after several people live-streamed their suicides on Facebook Live in early 2017. It now uses both algorithms and user reports to flag possible suicide threats.
Facebook’s rise as a global arbiter of mental distress puts the social network in a difficult position at a time when it is under investigation for privacy lapses by regulators in the United States, Canada and the European Union, and facing heightened scrutiny for failing to respond quickly to election interference and ethnic hate campaigns on its site. Even as Mark Zuckerberg, Facebook’s chief executive, apologized for the improper harvesting of user data, the company faced fresh revelations last month about special data-sharing deals with tech companies.
The anti-suicide campaign gives Facebook an opportunity to frame its work as good news. Suicide is the second leading cause of death among people aged 15 to 29 worldwide, according to the World Health Organization. Some mental health experts and police officials said Facebook had helped officers locate and stop people who were clearly about to harm themselves.
Facebook has computer algorithms that scan user posts, comments and videos in the United States and other countries for indications of immediate suicide risk. When a post is flagged, by the technology or by a concerned user, it moves to the company’s human reviewers, who are empowered to call local authorities.
“In the last year, we have helped first responders quickly reach some 3,500 people around the world who needed help,” Zuckerberg wrote in a November post about the efforts.
But other mental health experts said Facebook’s calls to the police could also cause harm, such as inadvertently precipitating suicide, compelling non-suicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.
And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.
“It’s hard to know what Facebook is actually picking up on, what they’re actually acting on, and whether they’re responding appropriately to the appropriate level of risk,” said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “It’s black box medicine.”
Facebook said it worked with suicide prevention experts to develop a comprehensive program to quickly connect users in distress with their friends and send them contact information for help lines. The experts also helped train dedicated Facebook teams, who have experience in law enforcement and crisis response, to review the most urgent cases. Those reviewers contact emergency services only in a minority of cases, when users appear to be at imminent risk of serious self-harm, the company said.
“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.
In a September post, Facebook described how it had developed a pattern recognition system to automatically score certain user posts and comments for the likelihood that they express suicidal thoughts. The system automatically escalates high-scoring posts, as well as posts reported by concerned users, to specially trained reviewers.
“Facebook has always been way ahead of the pack,” said John Draper, director of the National Suicide Prevention Lifeline, “not only in suicide prevention, but in taking an extra step toward innovation and committing to approaches that are truly smart and forward-thinking.” (Vibrant Emotional Health, the nonprofit group that runs the Lifeline, has advised and received funding from Facebook.)
Facebook said its suicide risk scoring system worked worldwide in English, Spanish, Portuguese and Arabic, except in the European Union, where data protection laws restrict the collection of personal details such as health information. There is no way to opt out, short of not posting on Facebook or deleting your account.
A review of four police reports, obtained by The Times through Freedom of Information Act requests, suggests that Facebook’s approach has had mixed results. Except in the Ohio case, the police departments redacted the names of the people flagged by Facebook.
In one case in May, a Facebook representative helped police officers in Rock Hill, South Carolina, locate a man who was broadcasting a suicide attempt on Facebook Live. In a recording of the call to the police station, the Facebook representative described the background of the video (trees, a street sign) to a police operator and provided the latitude and longitude of the man’s phone.
The Police Department credited Facebook with helping officers locate the man, who tried to flee and was taken to a hospital.
“Two people called the police that night, but they couldn’t tell us where he was,” said Courtney Davis, a telecommunications operator for the Rock Hill police, who responded to the call from Facebook. “Facebook could.”
The Police Department in Mashpee, Massachusetts, had a different experience. Just before 5:16 a.m. on August 23, 2017, a Mashpee police officer received a call from a neighboring police department about a man who was live-streaming his suicide on Facebook Live. Officers arrived at the man’s home a few minutes later, but by the time they reached him, he no longer had a pulse, according to police records.
At 6:09 a.m., according to the report, a Facebook representative called to alert police to the suicide threat.
Scott W. Carline, chief of the Mashpee Police Department, declined to comment on the case. But he said of Facebook: “I would like to see them improve the suicide prevention tools they have to identify warning signs that could prove fatal.”
Ms. Cain, the Facebook spokeswoman, said that in some cases, help unfortunately did not arrive in time. “We really feel for those people and their loved ones when that happens,” she said.
The fourth case, in May 2017, involved a teenager in Macon, Georgia, who was broadcasting a suicide attempt. Facebook called police after officers had already found the teenager at her home, a spokeswoman for the Bibb County Sheriff’s Office said. The teenager survived the attempt.
Some health researchers are also trying to predict suicide risk, but are using a more transparent methodology and collecting evidence on the results.
The Department of Veterans Affairs has developed a suicide risk prediction program that uses artificial intelligence to scan veterans’ medical records for certain medications and illnesses. If the system identifies a veteran as high risk, the V.A. offers mental health appointments and other services. Preliminary findings from a V.A. study reported fewer deaths overall among veterans in the program than among comparable veterans who did not participate.
In a forthcoming article in a Yale law journal, Mason Marks, a health law scholar, argues that Facebook’s suicide risk scoring software, together with its calls to the police that may lead to mandatory psychiatric evaluations, constitutes the practice of medicine. He says government agencies should regulate the program, requiring Facebook to produce evidence of its safety and effectiveness.
“In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, ‘Trust us here,’” said Mr. Marks, a fellow at Yale Law School and New York University School of Law.
Ms. Cain of Facebook disagreed that the program amounted to a health screening. “These are complex issues,” she said, “which is why we have been working closely with experts.”