Facebook Introduces Artificial Intelligence to Detect Suicidal Posts & Save Lives

If you’ve been on Facebook for any amount of time, you know that the social media platform relies on its users in a lot of ways to make the most of their experience. For example, if one of my Facebook friends posts something inappropriate, I can “flag” it for Facebook’s moderators to review and possibly take down if they see fit.

In an effort to further support and protect its widespread virtual community, Facebook is rolling out new software across its platform that it hopes will save lives.

“Proactive Detection” is an artificial intelligence technology that will scan all Facebook posts for patterns of suicidal thoughts, or life-threatening comments. When necessary, the platform will send mental health resources to at-risk users or their friends, and may even contact local first responders.

The new technology is designed to enhance user safety, decrease Facebook’s response time and send help sooner. The AI is able to flag concerning posts and alert prevention-trained human moderators rather than waiting for user reports.

It would change the game for cases like that of 12-year-old Katelyn Nicole Davis, who streamed her own suicide on Facebook Live earlier in 2017. The young girl reportedly hanged herself from a tree in her front yard, while the live stream captured the 20 minutes that followed.

What’s worse is that the video remained active on Facebook for more than 24 hours after Davis was pronounced dead. Family members and authorities had no way of removing the video. Only Facebook could do so, and at the time the company relied solely on its users to flag the post.

In addition to implementing the new AI, Facebook has teamed up with more than 80 partners, including Save.org, the National Suicide Prevention Lifeline and Forefront, to connect at-risk users with life-saving tools and support.

The technology has been in testing for more than a month now and will likely be put into effect across the platform in 2018. And the social media giant doesn’t plan to stop at just preventing suicide. According to Facebook CEO Mark Zuckerberg, Proactive Detection is the future in making social media a safer community.

“In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

With more than 2 billion users on the platform, it’s incredible to see Facebook taking initiative and responsibility for the safety of its virtual community.

Bri Lamm
Bri is an outgoing introvert with a heart that beats for adventure. She lives to serve the Lord, experience the world, and eat macaroni and cheese in between capturing life’s greatest moments on one of her favorite cameras.
