Content warning: this post discusses suicide, depression, traumatic events, and Facebook’s unwillingness to protect its users from credible threats.

Facebook is Watching You -- For Your Own Good

Hello folks! I’m guessing that by now most of you have heard about Facebook’s new “proactive suicide detection” AI. Reactions have ranged from relief to disbelief to outright horror, but Facebook has already made abundantly clear how little it cares about users’ privacy: it will not be possible to opt out of the monitoring while continuing to use the platform.

Facebook says it trained the AI by finding patterns in the words and imagery used in posts, videos, and live streams that have been manually reported as a suicide risk in the past. It also looks for comments like “Are you OK?” and “Do you need help?” The AI will scan all posts for patterns that seem to indicate suicidal thoughts and forward “worrisome” posts to Facebook’s human moderators. “When necessary,” the program will send mental health resources to the user or their friends, or contact local first responders.
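To make that pipeline a little more concrete, here’s a rough sketch of what pattern-based triage could look like in principle. This is purely illustrative and not Facebook’s actual system: the phrase list, the scoring, the threshold, and every function name below are invented for the example, and a real system would presumably use a trained model rather than a hand-written pattern list.

```python
import re

# Illustrative phrase patterns only; a production classifier would be a
# trained model, not a hand-curated keyword list.
CONCERNING_PATTERNS = [
    r"\bare you ok\b",
    r"\bdo you need help\b",
    r"\bcan'?t go on\b",
    r"\bwant to die\b",
]

# Hypothetical cutoff for escalating a post to human review.
REVIEW_THRESHOLD = 2


def score_post(post_text: str, comments: list[str]) -> int:
    """Count how many concerning patterns appear in a post and its comments."""
    text = " ".join([post_text, *comments]).lower()
    return sum(1 for pattern in CONCERNING_PATTERNS if re.search(pattern, text))


def triage(post_text: str, comments: list[str]) -> str:
    """Route a post: flag it for human moderators above the threshold, otherwise do nothing."""
    if score_post(post_text, comments) >= REVIEW_THRESHOLD:
        return "forward_to_human_moderator"
    return "no_action"


if __name__ == "__main__":
    # Comments like "Are you OK?" count toward the score, mirroring Facebook's description.
    print(triage("I just can't go on anymore", ["Are you OK?", "Do you need help?"]))
```

Even in this toy version you can see the core problem: every post and every comment gets scanned, whether or not anyone asked for it.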

Sounds fantastic, right? Well, at least if you don’t mind a Facebook algorithm and a team of dubiously qualified human moderators snooping through your most personal posts, regardless of your privacy settings. Aside from the fact that this is a massive violation of users’ privacy, here’s why this might not be such a great idea.