Did the EU just outlaw Skynet?

Understanding the AI Act's ban on unacceptable-risk AI practices

As Tim Harnett wrote in February, "The EU has taken a proactive approach with the Artificial Intelligence Act, while the U.S. remains fragmented in its AI governance". A central part of the EU's regulation of AI is the AI Act, which defines four levels of risk and bans the highest level as "unacceptable risk". What forms of AI usage were banned?

The European Union has taken significant steps towards regulating artificial intelligence with its Artificial Intelligence Act. This legislation aims to ensure that AI is developed and used responsibly, respecting fundamental rights and safety standards. The AI Act entered into force on 1 August 2024. The prohibitions and AI literacy obligations have applied since 2 February 2025. The governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025, while the rules for high-risk AI systems embedded into regulated products have an extended transition period until 2 August 2027.

In other words, the law applies in stages, and here I'm discussing the prohibitions against AI uses deemed to pose unacceptable risk. These are considered unacceptable due to their potential harm to individuals' safety, livelihoods, and rights. Let's look at each of these eight banned uses:

1. Harmful AI-based manipulation and deception

The EU seeks to ban AI that manipulates or deceives users for financial gain, political influence, or any other malicious purpose. Examples include deepfakes used to defame individuals or sway public opinion. This restriction is designed to protect citizens from misinformation campaigns and fraudulent activities.

2. Harmful AI-based exploitation of vulnerabilities

AI systems could potentially exploit human cognitive biases, physical impairments, or emotional states for personal gain. The EU considers this unethical and harmful, as it takes advantage of individuals' weaknesses. Banning such practices ensures that AI is used to empower users rather than exploit them.

3. Social scoring

Social credit scores have been criticized for their potential misuse in China, leading the EU to ban AI systems that assess or predict a person's trustworthiness based on behavior and social interactions. This restriction safeguards individuals' privacy and prevents discrimination based on arbitrary criteria.

4. Individual criminal offence risk assessment or prediction

AI tools used for predictive policing have been found to perpetuate biases present in historical data, leading to disproportionate targeting of certain communities. The EU has thus banned the use of AI to predict individual criminal behavior, as it can infringe upon an innocent person's rights and lead to profiling.

5. Untargeted scraping of internet or CCTV material for facial recognition databases

Facial recognition systems have raised concerns over privacy infringements due to their extensive use of biometric data obtained without users' consent. The EU has banned the untargeted collection and processing of such data. This restriction ensures that individuals maintain control over their personal information.

6. Emotion recognition in workplaces and education institutions

Emotion AI can be used to monitor employees or students continuously, creating a stressful environment and infringing upon privacy rights. The EU has banned this practice as it undermines mental well-being and creates an imbalance of power between individuals and organizations.

7. Biometric categorisation to deduce certain protected characteristics

AI systems could potentially analyze biometric data to infer sensitive information like a person's race, political opinions, religious or philosophical beliefs, or sexual orientation, leading to discriminatory treatment. The EU has banned this practice as it violates fundamental rights and principles of equal treatment.

8. Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

Mass surveillance using real-time facial recognition raises significant privacy concerns and can lead to over-policing of certain areas or communities, creating a chilling effect on citizens' daily lives. The EU has thus banned this practice, except when necessary for specific, substantial public safety emergencies, and for the targeted search for specific victims or specific suspected perpetrators of serious crimes punishable by a maximum sentence of at least four years.

In conclusion, the eight banned AI uses under the EU's AI Act are considered unacceptable due to their potential harm to individuals and society as a whole, with a clear focus on protecting individual privacy and rights. By prohibiting these applications of AI, the EU aims to foster innovation while protecting fundamental rights and promoting ethical development in artificial intelligence. As technology continues to advance, it is crucial that we remain vigilant in ensuring its responsible use.

To answer my own question in the title: No, the EU hasn't outlawed Skynet (the use of advanced AI to autonomously control military systems). Military use is specifically excluded from the scope of the regulation, as stated in its Article 2, for the simple reason that EU law currently doesn't cover defence: each EU member state controls its own armed forces. Opinions may differ on whether that's good or bad. The AI Act does, however, cover the use of such AI-powered systems for e.g. law enforcement purposes, so police probably can't employ Terminator-style killbots within the EU. Whew, that's a relief!

In keeping with the spirit of the AI Act, which emphasises the need for transparency around the use of AI in order to preserve trust, you should know that this article was written partly with the help of AI. I used the open NeMo model by the French company Mistral AI, running on my local computer. I asked it why the EU considers these eight usages unacceptable. I reviewed and expanded on the response, and corrected two hallucinations (factual errors). I may have missed other errors or accidentally added some of my own.

Disclaimer: I am most certainly not a lawyer, and this article is not legal advice.

For reference: The AI Act Explorer helps you search and navigate within the AI Act.
