YouTube To Flag Up Misleading AI Clips
YouTube says video creators must reveal when they have used artificial intelligence tools. However, the rules only apply in specific circumstances.
According to YouTube, the use of AI by creators is not a problem in itself. Instead, it wants viewers to be better informed about "whether the content they're seeing is altered or synthetic."
The new requirement only applies when people use such tools to create "realistic content", which YouTube defines as "content a viewer could easily mistake for a real person, place, scene or event." (Source: blog.youtube)
Animation OK
There's no need to label AI-based content that is "clearly unrealistic", such as animation or obvious special effects. Also exempt are minor visual tweaks such as background blurring, lighting filters and effects that make a video look vintage.
The rule also doesn't apply to cases where creators have used AI for associated tasks such as generating captions or writing scripts.
While the distinction between realistic and unrealistic can be blurry, YouTube says that context is key. For example, an AI-generated video of a tornado wouldn't necessarily need labeling. However, apparent footage of the tornado moving towards a real town would.
Warning Label
When creators do disclose they've used AI tools, a label will appear in the description of the video. However, in some cases the label will also appear more prominently in the video itself. This will include videos about sensitive topics such as elections, finance, health and news.
It doesn't appear YouTube will enforce the policy particularly strictly. It says it will only penalize creators who consistently fail to disclose AI use.
Rather oddly, YouTube also says it may add a label itself when the creator should have done so but failed. It's not clear if that will happen only when somebody reports a video or if YouTube has tools that can automatically spot videos requiring the label.
Tighter enforcement is coming for cases of AI content that falsely appears to show a real person's face or voice. Such videos aren't automatically banned, but the person portrayed will be able to request their removal. (Source: theregister.com)
What's Your Opinion?
Is this a sensible policy by YouTube? Does it matter if videos on the site are "genuine"? When, if ever, should AI-generated videos be labeled as such?
Comments
AI warning
Since we know YouTube has its own agenda and has demonetized and blocked videos later proven correct, while at the same time allowing obvious propaganda and misinformation to propagate unhindered;
Since we know AI will make things up and double-down when caught;
YouTube would best serve us by being less arrogant about its meddling, perhaps by putting its label(s) underneath the video title where the user can notice it and then move on.
The last thing we need is some third-party butting in with their opinion. We can read the comments for that.