Content warning: this post discusses suicide, depression, traumatic events, and Facebook’s unwillingness to protect their users from credible threats.

Facebook Is Watching You… For Your Own Good

Hello folks! I’m guessing that by now most of you have heard about Facebook’s new “proactive suicide detection” AI. Reactions have ranged from relief to disbelief to outright horror, but Facebook has already made it abundantly clear how little it cares about users’ privacy: it will not be possible to opt out of monitoring while continuing to use the platform.

Facebook says it trained the AI by finding patterns in the words and imagery used in posts, videos, and live streams that have been manually reported as suicide risks in the past. It also looks for comments like “Are you OK?” and “Do you need help?” The AI will scan all posts for patterns that seem to indicate suicidal thoughts and forward “worrisome” posts to Facebook’s human moderators. “When necessary,” the program will send mental health resources to the user or their friends, or contact local first responders.

Sounds fantastic, right? Well, at least if you don’t mind a Facebook algorithm and a team of dubiously qualified human moderators snooping through your most personal posts– regardless of your privacy settings. That alone is a massive violation of users’ privacy, but here are a few more reasons this might not be such a great idea.

Suicide By Cop

Sending first responders to “help” a suicidal person can be less than beneficial. For example, let’s look at the case of West Virginia police officer Stephen Mader.

Officer Mader responded to a 911 call in which a man’s girlfriend said that the man was threatening to kill himself. Mader made contact with the suicidal person, who was holding a firearm, and used his training to de-escalate the situation. That’s when two more officers arrived and shot the suicidal man dead. Officer Mader found himself jobless soon after for allegedly putting his fellow officers in danger– despite the fact that the man’s weapon wasn’t loaded.

The phrase “suicide by cop” is startlingly common in reports of police shootings. In many parts of the US, the police have an extremely adversarial relationship with the communities they patrol, and sending them to check on a desperately depressed person can be the mental health equivalent of throwing gasoline on a house fire.

Do You Trust Facebook?

Amid all the cheering from mental health professionals and people who have lost loved ones to suicide, it’s important to remember that this is the same company whose support team regularly ignores graphic threats of sexual assault, torture, and death directed at their users– even when those reports are backed by screenshots– and has a disturbing record of suspending victims’ accounts when they speak out about the threats they’ve received.

Australian writer Clementine Ford is one of the most high-profile people to be targeted by Facebook’s backwards enforcement of its community standards, but her treatment is far from an isolated incident. A disturbing number of women, LGBTQA+ people, and people of color have found themselves slapped with lengthy bans simply for speaking out about harassment they’ve received.

In fact, if you’ve spent any amount of time on the platform in the last few years, you’ve probably noticed that Facebook’s detection of violations and enforcement of its community standards are bizarrely hit or miss. Graphic public threats of violence toward marginalized groups usually aren’t found to violate community standards, but people can be banned for something as innocuous as complaining about period pain.

My own personal brush with Facebook’s enforcement bots was more annoying than enraging. For reasons that were never stated, I was slapped with a 7-day ban from joining, commenting in, or posting to groups. I had joined a few new groups, posted in several to introduce myself, and been fairly active in a reading club. I posted an art piece to another group, and BAM– the second I clicked the ‘post’ button I was locked out of my account.

When I regained access, I was informed of the ban. Facebook’s wonderful support team never responded to any of my messages disputing it and asking for clarification of what, precisely, I had done to get banned. (For those who are wondering, I had not posted more than once in any group, none of my posts were in the least bit uncivil, racy, or contentious, and I wasn’t promoting or selling anything.)

My experience was hardly something to contact the ACLU about, but it demonstrates that Facebook’s algorithms are anything but reliable. What’s more, Facebook’s support team doesn’t seem to care. Are these really the kind of people you want monitoring your mental health, or your loved ones’?

A Deeply Personal Subject

People can be depressed for a wide variety of reasons. A bad breakup. Being rejected by one’s family. The death of a loved one. Illness. Poverty. Hunger. Harassment and bullying. Domestic violence. Sexual assault. Military service. These are but a few of many traumatic experiences and situations that can cause feelings of depression and suicidal thoughts.

Because Facebook is a platform where people go to connect with others and find comfort in communities of people with similar experiences, they often share their pain and frustration there. When things get really bad, it’s nice to be able to reach out and know that others like you are there.

The knowledge that every post is now being monitored for indications of suicidal intent will only serve to further isolate people who are already alone and marginalized. In a world where people can count on being mocked and ostracized for publicly expressing their feelings, platforms like Facebook are, for some, the only place they can safely vent their pain and frustration. It may also be their only means of keeping in touch with support groups and friends.

Is it really a good thing to take away the only outlet these people have?

What’s more, like it or not, suicide is a basic human right. It’s our right to control our own bodies, including the choice to say “I’m tired of hurting. I want the pain to stop.” I don’t think it’s anyone’s business to interfere with that, and most of all not Facebook’s, a company which can’t even be bothered to remove users who abuse the platform to send graphic death threats.

Facebook Isn’t One of the Good Guys

Considering the company’s failure to protect its users from abuse and harassment, I find Zuckerberg’s sudden desire to protect users from themselves rather hollow. Is a platform that regularly finds that graphic rape threats don’t violate its community standards really that concerned that its users might harm themselves? Or is it more concerned about the negative publicity from incidents like recent Facebook Live suicides?

Like it or not, I suspect this new feature is less about helping people and more about damage control. Facebook, the ultimate data mining, social engineering, and social media juggernaut, doesn’t really care if you’re suffering to the point of considering suicide. They just don’t want your blood on their lawn.

What are your thoughts on Facebook’s new “proactive detection” artificial intelligence technology? Feel free to leave a comment, drop by my Facebook author page, or hit me up on Twitter @Leland_Lydecker.

Liked this? Please consider buying one of my books or supporting me on Patreon.
