Facebook recently published enforcement figures covering six main categories of banned content: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
While this degree of transparency from Facebook is welcome, it also reveals the sheer extent of the misinformation, fake accounts, and abusive content the company is now dealing with.
By far the most prevalent offending categories were spam and fake accounts: in the first quarter of this year alone, Facebook reportedly removed 837 million pieces of spam and disabled 583 million fake accounts.
Even so, the rate at which Facebook is taking down fake accounts is actually decreasing.
The number of posts on Facebook showing graphic violence rose in the first quarter of this year from the preceding three months, possibly driven by the war in Syria, the company said on Tuesday. A Washington Post report earlier this month found that the company's facial-recognition tool, which the company says could help spot impostor accounts, reviews only a small fraction of the site's roughly 2 billion monthly active users.
The company estimated that for every 10,000 pieces of content seen on Facebook overall, between seven and nine of them violated its adult nudity and pornography standards.
Nearly all of the spam taken down in the first quarter was identified and flagged by Facebook's systems before anyone reported it.
"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams", said Guy Rosen, the company's vice-president of product management, in a statement posted online announcing the release of the report. The company took down 2.5 million pieces of hate speech during the period, only 38% of which was flagged by its algorithms. The figures give users a look at just how much work goes into keeping the social media platform free from those looking to abuse it.
He said technology like artificial intelligence is still years away from effectively detecting most bad content because context is so important.
"For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue".
"It's also why we are publishing this information", Rosen added.
Since the fallout over political consulting firm Cambridge Analytica obtaining millions of Facebook users' data without their permission, Facebook has reiterated its commitment to being more transparent.
The report also covers fake accounts, an issue that has drawn more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to influence the 2016 U.S. elections.