Facebook Automated System Scans for Suicidal Posts, Offers Help

by John Lister

Facebook is to use artificial intelligence to spot posts made by people who might be suicidal. However, it will continue using human moderators to decide how to respond to such posts.

The site already has a tool that moderators can activate to display special messages to people whose wellbeing may be in question. These messages include details of local professional support services and help lines.

The messages also encourage the user to talk over their problems with a friend and even include suggested wordings for how to ask for help. Facebook says this tool was developed with the help of several specialist organizations that deal with mental health and suicide prevention. (Source: fb.com)

System Will Find Concerning Posts

Until now, Facebook only issued such messages when somebody - usually a friend - had actively filed a report about worrying posts by the person in question. Now the plan is to find and act on such posts before anyone files a report.

This automated "proactive detection" will cover both written posts and live streams. As well as looking at the content of the posts themselves, the system will also look at comments made by the user's friends in response to the video, for example where they express concern or ask if the person needs help. When the system detects a user who might be at risk, it will flag up the post in the same way as if a human had reported it.
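Facebook has not published how its detection models work, but the idea of combining signals from a post and from friends' replies can be illustrated with a minimal sketch. The phrase lists, function names and threshold below are purely illustrative assumptions standing in for a trained classifier, not Facebook's actual system.

```python
# Illustrative sketch only: Facebook's real models are not public.
# A "proactive detection" pass might score both the post text and
# friends' comments, then flag the post for human review when the
# combined signal crosses a threshold.

CONCERN_PHRASES = {"are you ok", "please talk to someone", "worried about you"}
RISK_PHRASES = {"can't go on", "want to end it", "goodbye everyone"}

def phrase_score(text: str, phrases: set[str]) -> int:
    """Count how many listed phrases appear in the text (a stand-in for a trained classifier)."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in phrases)

def should_flag(post_text: str, friend_comments: list[str], threshold: int = 2) -> bool:
    """Flag for human review when the post and friends' replies together suggest risk."""
    score = phrase_score(post_text, RISK_PHRASES)
    score += sum(phrase_score(comment, CONCERN_PHRASES) for comment in friend_comments)
    return score >= threshold

if __name__ == "__main__":
    post = "I just can't go on like this anymore."
    comments = ["Are you OK? I'm worried about you.", "Please talk to someone."]
    print(should_flag(post, comments))  # True -> routed to a human moderator
```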

Biggest Risks Flagged Up

Facebook is also using software tools to improve the way staff can deal with reports. One change uses an automated system to try to identify the posts where there seems to be the highest risk of the user imminently harming themselves. Human staff can then assess and deal with these posts first, rather than working through reports on a "first come, first served" basis. (Source: washingtonpost.com)
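In practice, "highest risk first" triage amounts to ordering a review queue by an estimated risk score rather than by arrival time. The sketch below is a hedged illustration of that idea using a standard priority queue; the risk scores are assumed inputs, not output from Facebook's actual model.

```python
# Illustrative sketch only: flagged posts are reviewed in order of estimated
# risk rather than first come, first served.

import heapq

review_queue: list[tuple[float, int, str]] = []

def enqueue(post_id: str, risk_score: float, arrival_order: int) -> None:
    """Add a flagged post; heapq pops the smallest value first, so the risk score is negated."""
    heapq.heappush(review_queue, (-risk_score, arrival_order, post_id))

def next_post_to_review() -> str:
    """Return the flagged post with the highest estimated risk, not the oldest one."""
    _, _, post_id = heapq.heappop(review_queue)
    return post_id

enqueue("post-a", risk_score=0.35, arrival_order=1)
enqueue("post-b", risk_score=0.91, arrival_order=2)
print(next_post_to_review())  # "post-b" jumps the queue despite arriving later
```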

Other automated changes include flagging up the specific point in a video that has prompted concern among a user's friends. This is possible because Facebook notes how far into a video a viewer is when they post a comment in reply. The idea is that Facebook staff can immediately review the most concerning section of the video rather than watching it all the way through.
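Since each comment carries a playback offset, finding the "most concerning section" reduces to locating the stretch of video where concerned comments cluster. The following sketch shows one simple way that could work; the 30-second window, field names and data are illustrative assumptions, not details Facebook has disclosed.

```python
# Illustrative sketch only: group comment playback offsets into fixed windows
# and surface the window with the most concerned comments, so a reviewer can
# jump straight to that part of the video.

from collections import Counter

def busiest_window(comment_offsets_seconds: list[int], window: int = 30) -> int:
    """Return the start (in seconds) of the window containing the most comments."""
    buckets = Counter(offset // window for offset in comment_offsets_seconds)
    busiest_bucket, _ = buckets.most_common(1)[0]
    return busiest_bucket * window

offsets = [12, 95, 101, 110, 400]  # seconds into the video when each comment was posted
print(busiest_window(offsets))     # 90 -> reviewer starts watching at 1:30
```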

What's Your Opinion?

Are these useful changes? Is it right for Facebook to look out for users who may be expressing a desire to harm themselves? Is the balance between automation and human review appropriate?


Comments

Dennis Faas:

It's great that Facebook is able to automate something like this, but at the same time it's pretty sad that people are so addicted to social media these days (such as Facebook and Twitter) that something like this is even needed.