The recent policing of posts on Facebook, which has resulted in some posts being removed and the users who made them being temporarily suspended, has raised a few questions.
People rightly wonder how the policing of Facebook is conducted. Is it done by an algorithm, or does it require human intervention? If human intervention is required, is this being done by Facebook employees, or does Facebook rely on its customers to report posts that violate the Facebook Community Standards (hereafter referred to as ‘FCS’)?
My gut feeling is that posts which violate the FCS will remain on Facebook untouched until a user reports them for review. With over one billion Facebook accounts in the system, in many different languages, there are surely plenty that in one way or another violate the FCS.
If you are in doubt, consider this: there are over one billion Facebook accounts. Therefore, if just 1% of all Facebook users have posted a single offensive post, that’s over 10 million potentially offensive posts!
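The back-of-envelope arithmetic can be sketched as follows (the one-billion account figure is the article’s round number, not an exact count, and the 1% share is purely an assumption for illustration):

```python
# Rough estimate of potentially offensive posts, using the article's figures.
total_accounts = 1_000_000_000  # "over one billion Facebook accounts" (round number)
offending_share = 0.01          # assume just 1% of users post a single offensive post

potentially_offensive = int(total_accounts * offending_share)
print(potentially_offensive)    # 10000000, i.e. 10 million posts
```

Since the real account count exceeds one billion, the true figure under this assumption would be somewhat over 10 million.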
Here come the thought police
To test my hypothesis that Facebook primarily relies on its users to report posts that violate the FCS, I had a look at the reporting options that are available, with the help of a kitten.
The images that follow will show you the various options that are available to Facebook users who wish to report posts that they feel violate the FCS.
(The exact options for reporting will vary depending on the type of Facebook post, as well as depending on which Facebook App is being used. This example is based on a laptop using the Google Chrome browser. The smartphone and tablet Facebook App menu options will vary.)
I am offended, and I will report you
The first step is to open the upper right-hand drop-down menu.
This gives you three choices. We will look at each, starting with the annoying one.
The annoyed user is given 4 options. As you can see, none of these options will result in the post being removed, only in blocking, unfollowing, unfriending or discussing;
The spam option has a similar result, much like the ‘Junk’ email filter in most email services. It does not offer an option to have Facebook review the post for FCS violations.
Here comes the removal service
If the user chooses the ‘should not be on Facebook’ option, things get more serious.
The resulting 5 categories all relate to violations of the FCS;
Even though these are all violations of the FCS, removal of the post and suspension of the user account still depends on the reporting user’s subsequent choices.
Rude, vulgar or bad language is ok
If the user has chosen ‘Rude, vulgar or bad language’, the post will not be reviewed or removed, as shown by the related 4 choices;
The ‘Sexually Explicit’ option could result in the post being removed if the reporting user decides on the first option, to have Facebook review the post, as seen below;
The post will, however, remain until the Facebook Team conducts its review. It will only be removed if they agree that the FCS has been violated.
Choosing “harassment or hate speech” results in the user being asked for more specifics;
Once that clarification is made, the user is given 4 choices, only one of which could result in the post being removed;
Again, the post stays on Facebook unless the Facebook Team agrees that it is hateful.
Threats of violence
Like hate speech, Facebook also wants more specifics for allegations of threatening, violent or suicidal posts;
If ‘Credible threat of violence’ is chosen, the user is again given the 4 familiar options, only one of which could result in the post being removed;
By now you know, the post remains pending the outcome of the review.
The ‘Something else’ option leads to potentially criminal or otherwise offensive and unlawful activities;
Buying and selling guns, some forms of drugs and adult products is surely not illegal in all countries; however, these activities are prohibited by the FCS.
Unauthorised use of intellectual property, depending on applicable copyright law and the circumstances of the case, could involve bona fide criminal acts.
As such, the user can choose to have Facebook review the post for possible removal.
The bottom line
So far all the references that I have been able to find regarding Facebook taking action on users’ posts indicate that Facebook ‘teams’ review all reports received from users and then decide on what action to take.
I have found nothing to support the idea that an automated Facebook algorithm is seeking out, identifying and removing offensive posts.
On this basis, until proven otherwise, it appears that Facebook is only removing posts after receiving reports from users who have cited one of the following infractions:
It’s sexually explicit
It’s harassment or hate speech towards:
- A race or ethnicity
- A religious group
- A gender or orientation
- People with disability or disease
- An individual
It’s threatening, violent or suicidal (involving):
- Credible threat of violence
- Self-injury or suicide
- Graphic violence
- Theft or vandalism
- Drug use
It describes buying or selling drugs, guns or adult products
I think it is an unauthorised use of my intellectual property
Once the Facebook team has removed a post, few details explaining why they did so are provided, only a brief standard message;
Facebook users are not given any clues whatsoever regarding what exactly they have done wrong, nor are they informed of who lodged the report against them.
To be fair, not all reports result in posts being removed (Facebook has rejected some reports that I have lodged), and Facebook has restored some posts that were removed in error.
The appeal process
There is no appeal process for posts that have been removed, only for accounts that have been disabled.
In Facebook’s own words;
Knowing that an appeal process is available, users who have had their accounts disabled could seek clarification regarding the removal of their posts when making their appeals.
Whether Facebook will provide such clarifications is anyone’s guess.
Thomas Timlen is a freelance writer, blogger, photographer, graphic artist and researcher based in Singapore. He blogs at The Inside Story.