Help us investigate how Facebook’s censorship policies actually work.
by Julia Angwin, Ariana Tobin, and Madeleine Varner, ProPublica
Earlier this month, in the wake of the Charlottesville attack on protesters, a post began circulating on Facebook titled: “Heather Heyer, Woman Killed in Road Rage Incident was a Fat, Childless 32-Year-Old Slut.”
You might have thought the post violated Facebook’s rules against hate speech. In fact, it did not. Facebook’s arcane hate speech rules, revealed by ProPublica in June, prohibit attacks only on “protected categories” of people — defined by attributes such as gender, race or religious affiliation — not attacks on individuals.
We’re launching an effort to learn more about Facebook’s secret censorship rules. (Details of how you can help are below.) These rules, which Facebook distributes to the thousands of human censors it employs across the world, draw elaborate distinctions between hate speech and legitimate political expression. One Facebook training slide published by ProPublica was particularly surprising to many of our readers. It asked which group was protected against hate speech: female drivers, black children or white men. The correct answer: white men.
The reason: Facebook doesn’t protect what it calls “subsets” of its protected categories. Black children and female drivers were both considered subsets under Facebook’s rules, while white men were protected based on race and gender.
After ProPublica’s article was published, Facebook changed its rules to add age as a protected category, according to a person familiar with the decision. That adjustment means Facebook will now delete slurs against black children, because both race and age are protected categories.
The “protected categories” rule doesn’t always prevail. Facebook eventually deleted the post about Heather Heyer for reasons unrelated to hate speech, according to a person familiar with the situation. It was expunged because it was published by The Daily Stormer, an organization on Facebook’s secret list of hate groups barred from the platform, and because it violated Facebook’s anti-bullying rules.
As Facebook’s rules evolve and expand according to a secret logic of their own, users are understandably confused. Many have sent us examples of hate speech they suspected had slipped through the censors’ filters, or of speech they felt was wrongly designated as hateful. Facebook’s decisions on these posts sometimes appeared to contradict its internal guidelines.
This raised a new set of questions for us: Are Facebook’s censors following the site’s rules consistently? And, more important: Despite all its efforts, is Facebook succeeding in ridding its site of hate speech?
Only Facebook users can help us answer these questions.
So we’re hoping you can join us in investigating Facebook’s handling of hate speech. To make it easy to participate, we built a Facebook bot — which is a tiny computer program that automatically converses with you over Facebook Messenger.
To use the bot, all you have to do is send a Facebook message to ProPublica’s page and click the “Get Started” button. Our bot will ask you questions about your experience with hate speech.
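For readers curious about the plumbing: a Messenger bot is essentially a small web service that Facebook calls whenever someone messages a page. Below is a minimal sketch in Python, using Flask and the Messenger Send API, of how such a bot receives a message and replies with a question. The tokens, the opening question, and the single-question flow are placeholders for illustration; our actual bot walks you through a longer series of questions.

```python
import os

import requests
from flask import Flask, request

app = Flask(__name__)

# Placeholder credentials -- in a real deployment these come from
# the Facebook developer dashboard for your page.
VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]
PAGE_ACCESS_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]

# Hypothetical opening question; a real questionnaire would branch
# based on each answer.
OPENING_QUESTION = (
    "Thanks for helping us investigate hate speech on Facebook. "
    "Did you report a post as hateful, or was one of your posts flagged?"
)


@app.route("/webhook", methods=["GET"])
def verify():
    """Facebook sends a one-time GET to confirm you own the webhook."""
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification token mismatch", 403


@app.route("/webhook", methods=["POST"])
def receive():
    """Each POST carries one or more Messenger events; answer text messages."""
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event:
                send_text(event["sender"]["id"], OPENING_QUESTION)
    return "ok", 200


def send_text(recipient_id, text):
    """Reply to a user through the Messenger Send API."""
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )


if __name__ == "__main__":
    app.run(port=5000)
```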
If you or someone you know reported a hateful post, or had one flagged as hateful, we’d like to hear about it, regardless of the outcome. Screenshots, links and exact wording are especially helpful, but please share whatever you have.
We know some people who violate Facebook’s community standards are blocked from the platform and so can’t message our bot. If you’d rather not use Facebook for any reason, we’ve also put the questions into a survey.
We may publish the information you share with us, but we will redact any personally identifying information unless we contact you and get your permission. The more people who contribute their experiences, the more accurate a picture we’ll be able to assemble.