Facebook Reveals Content Removal Stats

by John Lister

Facebook says it deleted 583 million fake accounts in the first three months of the year. That means on average three to four percent of active accounts in this period were bogus.

The figure comes in the company's first Community Standards Enforcement Report, which gives statistics about the action it takes against content that breaks its rules. It's part of an effort to improve transparency about the site and follows last month's first-time publication of the full details of its 'community standards'.

Spam The Most Prevalent Problem

As well as improving transparency, the figures are likely designed to act as public relations in a couple of ways. One is to show the sheer scale of rule-breaching content Facebook has to deal with. The other is to highlight the efforts it makes to automatically find and remove such content rather than wait for a user to report it.

The biggest figure is for spam, with Facebook saying it took down 837 million pieces of spam in the quarter, virtually all of it before anyone reported it. That sounds impressive, though many users may not understand what Facebook considers spam and whether there's any point reporting it. Indeed, many users may be unclear about how and why marketing material shows up in their news feed. (Source: fb.com)

Context Still a Quandary

The automated vetting systems do seem to have worked well in some areas. Facebook removed 21 million pieces of content that breached its rules on mature content, of which 96 percent was caught before being reported. Meanwhile, 3.5 million examples of graphic violence were either removed or given a warning label, of which 86 percent were found by the automated systems.

The systems don't work so well on hate speech: of the 2.5 million pieces removed, only 38 percent were picked up by the automated technology rather than reported by a user. Facebook says that's largely because defining hate speech depends so much on context. It gives the example of somebody posting on the site to say that they had been the victim of verbal abuse in the 'real world' and quoting the words that were used, something that doesn't usually breach the rules. (Source: gizmodo.com)

What's Your Opinion?

Do any of these stats surprise you? Should Facebook hire more staff to vet content that might breach its rules? Are artificial intelligence and automated assessment the only way to go when dealing with so much content?
