Facebook Fails to Take Down Images Exploiting Children

John Lister

A major news organization says Facebook doesn't do enough to remove images that exploit children. The British Broadcasting Corporation (BBC) says the site removed just 18 out of 100 images that it reported as part of an investigation.

The test was part of a follow-up to a report last year about people using members-only groups on the site to share inappropriate images of children. At the time, Facebook said it was improving its moderation system.

To see if that was the case, the BBC used Facebook's formal reporting button to report 100 images that appeared to breach the site's stated guidelines. They included a mixture of explicit images along with those that were not inherently explicit but had been posted with inappropriate comments in groups dedicated to child exploitation.

Only One in Five Pics Removed

The BBC waited until it had a response for every image. Of the 100, 18 were removed, but for each of the remaining 82, Facebook sent an automated message saying the image didn't breach its community standards. (Source: bbc.co.uk)

The figures raise questions about exactly how Facebook assessed the images in question. That the pictures remained online may suggest the moderation process was highly automated and didn't assess the context of the images and the pages on which they appeared.

Facebook's rules also bar anyone convicted of particular crimes against children from having a profile on the site. The BBC says it reported five accounts that breached these rules, but none were removed.

BBC Interview Request Prompts Facebook Report to Crime Authority

When the BBC asked for an interview, Facebook agreed but said it would first need to see examples of the material that the BBC had reported and was still online. When the BBC provided the examples, Facebook immediately reported the infractions to the National Crime Agency - the body that coordinates efforts by regional police forces to fight serious and organized crime.

Facebook has since issued a statement which reads: "We have carefully reviewed the content referred to us and have now removed all items that were illegal or against our standards. This content is no longer on our platform. We take this matter extremely seriously and we continue to improve our reporting and take-down measures. Facebook has been recognized as one of the best platforms on the Internet for child safety." (Source: techcrunch.com)

What's Your Opinion?

Is Facebook doing enough to remove inappropriate images? Should images of children that aren't inherently illegal be removed if it's clear they are being used in an exploitative fashion? Is it practical to apply nuance and context to moderation on a site with as much content as Facebook?


Comments

Dennis Faas

Based on the article, it sounds like the majority of reports are handled by automated algorithms that determine whether pictures are in violation. While automated image recognition software has advanced by leaps and bounds over the last decade or so, only a human being can determine whether an infraction has actually taken place.
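To illustrate the limitation (this is only a sketch of one common approach, not a description of Facebook's actual systems): automated moderation often works by comparing uploads against perceptual hashes of images already confirmed as violations. The example below uses the open-source Python "imagehash" and Pillow libraries; the file names and threshold are hypothetical. Note that such matching can only recognize pixels it has seen before; it has no notion of the context, comments, or group in which an otherwise ordinary photo is posted.

    # Sketch of hash-based image matching (illustrative only; file names are hypothetical).
    import imagehash
    from PIL import Image

    # Hypothetical store of perceptual hashes for images already confirmed as violations.
    known_bad_hashes = [
        imagehash.phash(Image.open("confirmed_violation_1.jpg")),
        imagehash.phash(Image.open("confirmed_violation_2.jpg")),
    ]

    def looks_like_known_violation(path, max_distance=5):
        """Flag an upload if its perceptual hash is close to a known-bad hash.

        What this cannot do: a photo that is not itself explicit but is posted
        with exploitative comments in a dedicated group will never match,
        because the system only sees pixels, not context.
        """
        upload_hash = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives the Hamming distance between them.
        return any(upload_hash - bad <= max_distance for bad in known_bad_hashes)

    print(looks_like_known_violation("reported_upload.jpg"))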

That said, given the sheer volume of people who use Facebook, policing complaints 24/7 would likely be impractical. As for the "members only" groups, I don't see how outsiders could report violations without being part of the group, unless the group's name is visible to the public and says exactly what it is.

Simply put: this is not an easy thing to beat, and it ranks right up there with eliminating piracy and computer viruses. Every time technology takes a step forward to automate and combat such issues, others will find a way around it.