Use of AI Could Be Banned or Restricted

By John Lister

Proposed laws in Europe would ban some forms of artificial intelligence while restricting others. But critics say the proposals are too vague to be workable in practice.

The proposals, expected to be formally published in the next few weeks, come from the European Commission. If approved by the European Parliament, they could become a regulation taking immediate legal effect across all European Union countries. It's not yet clear whether and how the rules would extend to businesses outside the EU, as happens with data protection rules.

Under the proposals, only AI used by the military or for public security purposes would be exempt. AI in general policing would likely come under the rules.

The rules would completely ban some forms of AI, including systems used for indiscriminate surveillance. Also outlawed would be AI designed to manipulate human behavior or decision-making to a person's detriment, systems used for "social scoring," and systems designed to exploit people's vulnerabilities.

"Minority Report" Systems Restricted

Those definitions have led to suggestions the rules would be unworkable in practice: the terms are too subjective and could have unintended consequences for AI uses that most people would consider reasonable and legitimate.

The rules would also class some AI as "high risk," requiring approval and oversight from national regulatory bodies. The category includes algorithms already in widespread use, such as those for rating creditworthiness, screening job applicants, and allocating educational places.

It also covers uses of AI that would certainly be controversial, such as predicting crimes and setting priorities for responding to emergency service calls.

Kill Switch Engaged

Under the rules, high-risk AI systems would also need a "kill switch" feature that allows operators to immediately stop the system from operating if necessary.
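The proposal doesn't specify how such a feature would be built, but in software terms a kill switch is often just a shared flag that the system checks continuously and that a human operator can set at any time. The sketch below is purely illustrative (the names `kill_switch` and `run_model_loop` are invented for this example, not taken from the proposal), using Python's standard `threading.Event`:

```python
import threading
import time

# Illustrative sketch only: a shared flag the processing loop checks
# on every iteration, so an operator can halt the system immediately.
kill_switch = threading.Event()

def run_model_loop(results):
    """Keep processing work items until the kill switch is engaged."""
    item = 0
    while not kill_switch.is_set():
        results.append(item)  # stand-in for one AI inference step
        item += 1
        time.sleep(0.01)

results = []
worker = threading.Thread(target=run_model_loop, args=(results,))
worker.start()

time.sleep(0.1)    # the system runs normally for a while...
kill_switch.set()  # ...until an operator engages the kill switch
worker.join()
print(f"Stopped after {len(results)} items")
```

The key design point is that the stop condition is checked inside the work loop itself, so shutdown takes effect within one iteration rather than waiting for the whole job to finish.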

Another measure would require businesses to disclose their use of so-called "deepfakes": AI-generated manipulated images or video designed to fool viewers into thinking they are seeing genuine footage of real people, for example by changing what a person actually said.

The most serious penalties under the rules would be for developing a banned system or failing to reveal correct information about a high-risk system. Either would carry a maximum fine of four percent of a company's global revenue.

What's Your Opinion?

Would you like to see similar laws where you live? Can such definitions and restrictions really work in practice? Should there be any place for laws governing artificial intelligence?
