Google: Robots Better than Humans at Spotting Extremist Videos
Google says its artificial intelligence system is better at identifying guideline-breaking videos than human moderators. The claim is somewhat subjective, however.
The company, which runs YouTube, made the claim as part of an update on how it deals with 'questionable' videos.
Last month, Google said it was refining its use of computerized systems for examining videos and checking whether they breach content guidelines. It says that as a result of the improvements, three-quarters of removed videos are now caught automatically before the company receives a complaint from a human viewer.
AI Works Quickly and Accurately
It also says the accuracy of the automated system has improved dramatically, and that "in many cases our systems have proven more accurate than humans at flagging videos that need to be removed." That's an arguable point, however, since whether a video breaches the rules is ultimately a subjective judgment. (Source: telegraph.co.uk)
The improvements to the automated reviewing were designed to do a better job of assessing context. For example, the same images used in a news report might be acceptable on YouTube, while being banned when used as part of a pro-terrorist recruitment video.
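As a rough illustration of the kind of context-sensitive decision being described, here is a minimal sketch; the signal names, thresholds, and decisions are hypothetical and do not reflect Google's actual system:

```python
# Hypothetical sketch: the same footage can be acceptable or not depending
# on the context it appears in. None of these names or thresholds come from
# Google; they only illustrate the idea of context-aware review.

from dataclasses import dataclass

@dataclass
class VideoSignals:
    violence_score: float      # 0.0 - 1.0, how graphic the footage looks
    is_news_report: bool       # e.g. uploaded by a verified news channel, narrated
    recruitment_cues: bool     # calls to action, glorification, contact details

def review_decision(v: VideoSignals) -> str:
    """Return a moderation decision for one video."""
    if v.recruitment_cues:
        return "remove"                  # footage used for recruitment/propaganda
    if v.violence_score > 0.7 and not v.is_news_report:
        return "flag_for_human_review"   # graphic footage without news context
    return "allow"                       # e.g. the same images inside a news report

# Identical footage, different context, different outcome:
news_clip = VideoSignals(violence_score=0.8, is_news_report=True, recruitment_cues=False)
propaganda = VideoSignals(violence_score=0.8, is_news_report=False, recruitment_cues=True)
print(review_decision(news_clip))   # allow
print(review_decision(propaganda))  # remove
```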
Besides speed and, in some cases, accuracy, the key advantage of the automated systems is that they scale far more easily than recruiting additional human moderators. In the past month, Google says it has been able to double the number of extremist videos taken down and halve the average time they stay online before removal.
Controversial Content Carries No Ads
There's still room for humans, however. Google says that since its last update on the topic, it has added 15 organizations to its list of advisors who help it identify extremist content and material used for radicalization.
Google also said it's just about ready to start implementing a new classification for videos that don't break the precise wording of its terms and conditions but come close to the line. It says videos that contain "controversial religious or supremacist content" will be placed in a special "limited state" category, meaning they don't carry advertisements, have comments switched off, and lose the usual "like/dislike" feature. (Source: googleblog.com)
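To make the effect of that "limited state" concrete, here is a hedged sketch of how such a designation could translate into the restrictions described above; the field names are illustrative assumptions only, not YouTube's real API or data model:

```python
# Illustrative only: one way a "limited state" flag could map onto the
# restrictions the article describes (no ads, no comments, no like/dislike).

from dataclasses import dataclass

@dataclass
class PlaybackFeatures:
    show_ads: bool
    allow_comments: bool
    show_like_dislike: bool

def features_for(state: str) -> PlaybackFeatures:
    if state == "limited":
        # Borderline content stays up, but with engagement features stripped.
        return PlaybackFeatures(show_ads=False, allow_comments=False,
                                show_like_dislike=False)
    # Normal videos keep the full feature set.
    return PlaybackFeatures(show_ads=True, allow_comments=True,
                            show_like_dislike=True)

print(features_for("limited"))
print(features_for("normal"))
```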
What's Your Opinion?
Do you trust automated systems to accurately detect a video that is offensive and breaks guidelines? Are such systems the only way to cope with the sheer number of videos YouTube hosts? Is the "limited state" category a sensible way to deal with controversial content?
Comments
Sure, use robots as the "bad cop."
Google, YouTube, and Facebook are becoming extreme in shutting down conservative viewpoints while they allow known terrorist organizations to post videos.
Since they are virtual monopolies, they are able to stop all political speech of conservatives while they promote speech of socialists.
This is extremely dangerous! Oh yes, robots help their human leftists root out conservative viewpoints and demonetize them and shut them down. Political speech of one side is removed while political speech of the other side is promoted.
Is my freedom of speech limited to only speech that you agree with? If so, then it is not free speech at all. True freedom of speech is the right to say things that others do not like. Don't restrict free speech!
I trust an automated system
I trust an automated system to do pre-screening when it comes to subjective material. As the article stated, the same image could be used in a good context as well as bad.
As for handling large volumes of new material, institute a delay function so the automated system doesn't simply rubber-stamp each submission. Questionable items get delayed for further analysis or a human decision, as in the sketch below.
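A rough sketch of the delayed pre-screening pass the commenter describes might look like the following; the threshold values, the review queue, and the score_submission helper are all hypothetical:

```python
# Hypothetical sketch of the commenter's idea: the automated system scores
# each submission, clear-cut cases are handled immediately, and questionable
# items are delayed into a queue for further analysis or a human decision.

from queue import Queue

REVIEW_QUEUE: Queue = Queue()   # items awaiting human review

def score_submission(video_id: str) -> float:
    """Stand-in for an automated classifier returning 0.0 (clean) to 1.0 (violating).
    A real system would run the video through its models here."""
    return 0.5  # demo value so the sketch runs end to end

def pre_screen(video_id: str) -> str:
    score = score_submission(video_id)
    if score >= 0.9:
        return "rejected"            # confidently violating: never published
    if score >= 0.4:
        REVIEW_QUEUE.put(video_id)   # questionable: delayed for a human decision
        return "held_for_review"
    return "published"               # confidently clean: goes live immediately

print(pre_screen("example_video_id"))  # held_for_review
```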
I really don't know of any news source that is truly neutral, without an agenda, and just-the-facts. If an agency wants respect as a news source, it shouldn't be slanted; slant just undermines its credibility. That's their "freedom of speech," but I don't have to listen to or care about what they have to say.
Google is a global, profit-driven company. Not even close to being a neutral/for the greater good/betterment of mankind entity. The 'limited state' category is Google's manipulation of material.
Wouldn't that be nice....a single global unbiased news source! Shut down the propaganda machines, expose cover-ups, and let the world know what really goes on on the other side.