EU Bans AI Systems with Unacceptable Risk

EU’s AI Act Enters Compliance Phase

As of Sunday, February 2, the European Union’s regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm.

Compliance Deadline

February 2 is the first compliance deadline for the EU’s AI Act, the comprehensive AI regulatory framework that the European Parliament approved in March 2024 after years of development. The Act formally entered into force on August 1, 2024; this deadline is the first of several staggered compliance milestones.

Risk Levels

The specifics are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments. Under the bloc’s approach, there are four broad risk levels:

  • Minimal risk (e.g., email spam filters): no regulatory oversight.
  • Limited risk (e.g., customer service chatbots): light-touch regulatory oversight.
  • High risk (e.g., AI for healthcare recommendations): heavy regulatory oversight.
  • Unacceptable risk (the focus of this month’s compliance requirements): prohibited entirely.
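
To make the tiering concrete, here is a minimal illustrative sketch in Python. The `RiskTier` enum, its descriptions, and the example mapping are our own assumptions drawn from the examples above; they are not definitions from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four broad risk tiers, paraphrased from the article (illustrative only)."""
    MINIMAL = "no regulatory oversight"
    LIMITED = "light-touch regulatory oversight"
    HIGH = "heavy regulatory oversight"
    UNACCEPTABLE = "prohibited entirely"

# Hypothetical mapping of example systems to tiers, mirroring the article's examples.
EXAMPLES = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommendation system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```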

Unacceptable Activities

Some of the unacceptable activities include:

  • AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
  • AI that manipulates a person’s decisions subliminally or deceptively.
  • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
  • AI that attempts to predict people committing crimes based on their appearance.
  • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
  • AI that collects "real-time" biometric data in public places for the purposes of law enforcement.
  • AI that tries to infer people’s emotions at work or school.
  • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.

Fines and Enforcement

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
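
As a quick illustration of the fine arithmetic described above, here is a minimal Python sketch; the function name and the revenue figure are hypothetical, and this is a simplified reading of the article’s description (the greater of €35 million or 7% of prior-year annual revenue), not legal advice.

```python
def max_fine_eur(prior_year_revenue_eur: float) -> float:
    """Upper bound on the fine for a prohibited-AI violation:
    the greater of EUR 35 million or 7% of prior-year annual revenue."""
    return max(35_000_000, 0.07 * prior_year_revenue_eur)

# For a company with EUR 1 billion in prior-year revenue,
# 7% (EUR 70 million) exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```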

Preliminary Pledges

The February 2 deadline is in some ways a formality. Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories — which included Amazon, Google, and OpenAI — committed to identifying AI systems likely to be categorized as high risk under the AI Act.

Possible Exemptions

There are exceptions to several of the AI Act’s prohibitions. For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a "targeted search" for, say, an abduction victim, or to help prevent a "specific, substantial, and imminent" threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that "produces an adverse legal effect" on a person solely based on these systems’ outputs.

Conclusion

The European Union’s AI Act is a significant step toward regulating AI in the region. With the first compliance deadline now in effect, companies operating in the EU must confirm that none of their systems fall under Article 5’s prohibited practices. Because those prohibitions carry nuanced exemptions, companies should review the forthcoming guidelines and standards carefully to ensure compliance.

FAQs

Q: What is the EU’s AI Act?
A: The EU’s AI Act is a comprehensive, risk-based regulatory framework that governs how AI systems are developed and deployed in the European Union.

Q: What is the purpose of the AI Act?
A: The purpose of the AI Act is to ensure that AI systems are developed and used in a way that is safe, transparent, and respects human rights.

Q: What are the risk levels under the AI Act?
A: The AI Act categorizes AI systems into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk.

Q: What are the unacceptable activities under the AI Act?
A: The AI Act prohibits practices such as social scoring, subliminal or deceptive manipulation of a person’s decisions, exploitation of vulnerabilities like age, disability, or socioeconomic status, and other applications deemed to pose unacceptable risk or harm.

Q: What are the fines for non-compliance with the AI Act?
A: Companies found to be using AI systems that pose unacceptable risk or harm can be fined up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
