Next Week Marks the Beginning of a New Era for AI Regulations
The European Union’s AI Act is set to take effect, marking a significant milestone in the regulation of artificial intelligence. As of February 2, 2025, companies across the globe that operate in the EU must navigate a new regulatory landscape with strict rules and high stakes.
The initial phase of the EU AI Act prohibits the deployment or use of AI systems that pose risks deemed unacceptable under the Act, including social scoring, emotion recognition in certain settings, and real-time remote biometric identification in public spaces. Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it crucial for organizations to understand and comply with the restrictions.
Early Compliance Challenges
“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”
Ergin highlights that even though most compliance requirements won’t take effect until mid-2025, the early prohibitions set a decisive tone.
“The pressure in 2025 is twofold,” he remarks. “Businesses must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production.”
Ergin believes the key to compliance and success lies in data governance.
“Without robust data foundations, organizations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”
EU AI Act Has No Borders
The extraterritorial scope of the EU AI Act means non-EU organizations are not off the hook, as Marcus Evans, a partner at Norton Rose Fulbright, explains.
“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organizations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”
Encouraging Responsible Innovation
The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.
“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.
Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.
“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.
“It is critical that we focus on eliminating bias and prioritizing fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”
What’s Prohibited under the EU AI Act?
To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:
• Harmful subliminal, manipulative, and deceptive techniques
• Harmful exploitation of vulnerabilities
• Unacceptable social scoring
• Assessing or predicting the risk of an individual committing a criminal offence (with some exceptions)
• Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases
• Emotion recognition in areas such as the workplace and education (with some exceptions)
• Biometric categorization to infer sensitive categories (with some exceptions)
• Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)
A New Landscape for AI Regulations
The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organizations must learn to navigate new rules and continuously adapt to future changes.
For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.
FAQs:
Q: What is the EU AI Act?
A: The EU AI Act is a European Union regulation that takes a risk-based approach to artificial intelligence, prohibiting practices deemed to pose unacceptable risk and imposing transparency and accountability obligations on other AI systems, with the aim of ensuring AI is developed and deployed safely and trustworthily.
Q: What is the scope of the EU AI Act?
A: The EU AI Act applies to organizations in the EU using AI, as well as those providing, importing, or distributing AI to the EU market, or using AI where the output is used in the EU.
Q: What are the prohibited activities under the EU AI Act?
A: The EU AI Act prohibits harmful subliminal, manipulative, and deceptive techniques, unacceptable social scoring, emotion recognition in areas such as the workplace and education, biometric categorization to infer sensitive categories, untargeted scraping to build facial recognition databases, and certain other practices, several of which are subject to limited exceptions.
Q: What are the penalties for non-compliance with the EU AI Act?
A: Companies found in violation of the EU AI Act could face penalties of up to 7% of their global annual turnover.
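For a rough sense of scale, the 7%-of-turnover ceiling cited above can be translated into a maximum exposure figure. A minimal sketch, for illustration only (actual fines are determined case by case by regulators, not by this formula):

```python
def max_turnover_penalty(global_annual_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine under the 7%-of-global-annual-turnover
    ceiling cited in the article. Illustrative only: real penalties are set
    case by case by the relevant authorities."""
    return 0.07 * global_annual_turnover_eur

# A company with EUR 2 billion in global annual turnover:
print(max_turnover_penalty(2_000_000_000))  # 140000000.0
```

So a business with €2 billion in worldwide annual turnover would, in the worst case, face a penalty ceiling of €140 million, which is why compliance planning is being treated as a board-level concern.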