Is Your Business Ready for the EU AI Act? (Part 2)

So… How Does the EU AI Act Work?

The EU AI Act takes a risk-based approach to regulating AI systems and General Purpose AI (GPAI) models. It classifies them into four categories:

  • Minimal Risk

  • Limited Risk

  • High Risk

  • Unacceptable Risk

Each risk level comes with its own set of requirements, and these also vary depending on an organization's role (e.g., provider, deployer, importer, distributor). Most of the AI Act's obligations (and most of the compliance risk) center on high-risk systems.


Unacceptable Risk: The No-Go Zone

Let's start with the most straightforward category: unacceptable risk. These AI systems are simply prohibited in the EU because they pose a serious threat to the rights of individuals.

Here are some examples of unacceptable risk AI systems:

  • Systems that use manipulative or subliminal techniques.

  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

  • Systems that apply social scoring and classify people based on their behavior, socio-economic status, or personal characteristics, leading to discriminatory treatment.

  • Predictive policing based solely on profiling or personal characteristics (sounds a bit like Minority Report, right?).

  • Systems used for untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.

  • Systems that infer emotions in workplaces or educational institutions (except for medical or safety reasons).

  • Systems that exploit vulnerabilities in individuals due to their age, physical or mental disabilities, socioeconomic status, or other traits to influence their behavior in harmful ways.

  • Systems that categorize individuals based on biometric data to deduce sensitive attributes like race, political opinions, or sexual orientation (with a narrow carve-out for the labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement).

The potential for abuse in these cases is clearly high, and the consequences could be irreversible.

 

High-Risk Systems: Treading Carefully

High-risk systems are at the core of the AI Act's regulatory framework. An AI system is classified as high-risk based on its intended purpose. This means that careful analysis of each specific case is crucial to determine whether an AI system falls into this category.

An AI system is considered high-risk if:

  • It's a product, or a safety component of a product, that already requires a third-party conformity assessment under EU harmonization legislation, such as toys, civil explosives (which, by the way, you shouldn't combine!), and safety components in vehicles like self-driving cars.

  • It's listed in Annex III of the AI Act. This annex covers a wide range of applications, including biometric systems, critical infrastructure, education, employment, essential services, credit scoring, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.

Important exception: an AI system listed in Annex III may escape the high-risk classification if it doesn't pose a significant risk to individuals' health, safety, or fundamental rights, for example when it only performs a narrow procedural task.
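To make this screening logic a bit more concrete, here is a minimal, purely illustrative Python sketch of how an organization might triage its AI systems against the criteria above. The profile fields and the helper function are hypothetical, and a real assessment always requires case-by-case legal analysis.

    from dataclasses import dataclass

    @dataclass
    class AISystemProfile:
        # Hypothetical fields mirroring the criteria discussed above.
        in_regulated_product: bool        # product (or safety component) needing third-party conformity assessment
        annex_iii_use_case: bool          # listed in Annex III (biometrics, employment, credit scoring, ...)
        narrow_procedural_task: bool      # only performs a narrow, procedural task
        significant_risk_to_people: bool  # poses a significant risk to health, safety, or fundamental rights

    def screen_high_risk(profile: AISystemProfile) -> bool:
        """Rough first-pass triage of the criteria above; illustration only, not legal advice."""
        if profile.in_regulated_product:
            return True
        if profile.annex_iii_use_case:
            # The exception noted above: a narrow procedural task without significant risk
            # may fall outside the high-risk category.
            if profile.narrow_procedural_task and not profile.significant_risk_to_people:
                return False
            return True
        return False

    # Example: a CV-screening tool used in recruitment (an Annex III use case).
    print(screen_high_risk(AISystemProfile(False, True, False, True)))  # True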

Providers of high-risk AI systems have several obligations under the AI Act, including:

  • Risk management

  • Data and data governance

  • Technical documentation

  • Record-keeping (using automated logs)

  • Transparency and provision of information to deployers

  • Human oversight

  • Accuracy, robustness, and cybersecurity

On top of these, additional obligations apply to providers, deployers, importers, and distributors of high-risk AI systems. These stakeholders must collaborate to uphold the principles of transparency, safety, and accountability to foster trust and innovation in the AI ecosystem.

For example, deployers must take appropriate measures to use high-risk AI systems in accordance with the provider's instructions and ensure sufficient "AI literacy" among the staff who monitor the system and exercise human oversight. They must also report serious incidents and notify providers and distributors of any inappropriate or risky behavior by the AI system.

It's important to note that the AI Act's obligations often overlap with the GDPR, especially since AI systems frequently process personal data.

Importers and distributors also have their own set of obligations under the AI Act. (We'll save the finer details for a future deep dive!)

 

Limited Risk: Transparency is Key

Limited risk AI systems, also known as "Transparency Risk" systems, include those that interact directly with individuals or generate synthetic content like audio, images, videos, or text. These are common in B2C applications. Think AI chatbots, AI image generators, emotion recognition systems, deepfakes, and so on.

To prevent these systems from infringing on individual rights, the AI Act imposes transparency obligations on providers and deployers.

Providers must ensure that:

  • AI systems that interact directly with people (like ChatGPT) are designed to make users aware they are interacting with AI.

  • Systems that generate synthetic content (like audio, image, video, or text files) mark their output as artificially generated or manipulated in a machine-readable format (a minimal sketch of what such labelling could look like follows the deployer list below).

Deployers must:

  • Inform individuals when they are exposed to an emotion recognition or biometric categorization system and process their data in accordance with the GDPR.

  • Clearly disclose any deepfakes as artificially generated or manipulated.

  • Disclose when AI is used to create or manipulate text published to inform the public on matters of public interest.
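To illustrate what the labelling obligations above could look like in practice, here is a minimal sketch that attaches a visible disclosure and simple machine-readable metadata to generated output. The disclosure wording and metadata fields are invented for illustration; the AI Act does not prescribe this specific format.

    import json
    from datetime import datetime, timezone

    def label_generated_text(text: str, model_name: str) -> dict:
        """Wrap model output with a visible disclosure and machine-readable provenance metadata.
        The field names below are illustrative, not a standard mandated by the AI Act."""
        return {
            "content": text,
            "disclosure": "This content was generated by an AI system.",
            "metadata": {
                "ai_generated": True,
                "model": model_name,
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    labeled = label_generated_text("Here is a summary of your request...", "example-chat-model")
    print(json.dumps(labeled, indent=2))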

 

Minimal Risk: Smooth Sailing (Mostly)

If your AI system doesn't fall into any of the higher-risk categories, congratulations! You have a "minimal risk" AI system and aren't subject to specific obligations under the AI Act. However, you still need to comply with other relevant laws, such as the GDPR (yes, it comes up a lot!).


Multi-Category AI Systems: The Juggling Act

What happens when an AI system meets the criteria for multiple risk categories? These are called "multi-category AI systems," and they can be tricky to navigate.

Organizations must comply with the requirements for each category their technology falls into. AI systems with biometric capabilities are a prime example.

  • Systems categorizing people based on biometric data to deduce sensitive attributes like race, political opinions, or sexual orientation fall under the Unacceptable Risk classification.

  • The definition of a high-risk AI system specifically includes biometric systems.

  • AI with biometric categorization is also considered limited risk.

As you can see, an AI system can be subject to multiple layers of obligations. It's also important to consider how a system might evolve. For example, a limited risk AI chatbot that learns to infer sensitive attributes for marketing purposes could become an Unacceptable Risk.

 

GPAI Models: A Chapter of Their Own

The AI Act dedicates an entire chapter to regulating General Purpose AI models. A GPAI model:

  • Is an AI model, not a system (though it may be integrated into one).

  • Is trained on a massive amount of data using self-supervision (think GPT-3, reportedly trained on over 570 gigabytes of text!).

  • Is general-purpose and can perform a wide variety of tasks.

GPAI models are regulated based on whether they pose a systemic risk. A model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations, or when the European Commission designates it as such.

Providers of GPAI models must:

  • Maintain and provide technical documentation to authorities and downstream providers.

  • Put in place a policy to comply with EU copyright law.

  • Cooperate with the European Commission and national authorities.

  • Publish a sufficiently detailed summary of the content used for training.

  • Appoint an authorized representative in the EU (for providers established outside the EU).

Providers of GPAI models that present systemic risks have additional obligations, such as:

  • Performing model evaluation and adversarial testing.

  • Assessing and mitigating systemic risks at the EU level.

  • Ensuring adequate cybersecurity protection.


Enforcement and Penalties

Like the GDPR, the AI Act includes administrative fines for violations. These penalties are designed to be effective, proportionate, and dissuasive.

Maximum penalties under the AI Act:

  • €35 million or 7% of worldwide annual turnover for using a prohibited AI system.

  • €15 million or 3% of worldwide annual turnover for non-compliance with other obligations.

  • €7.5 million or 1% of worldwide annual turnover for providing incorrect or misleading information to regulators.

Unlike the GDPR, the AI Act considers business size when determining fines. For large organizations, the maximum fine is whichever is higher: the fixed amount or the percentage of turnover. For startups and SMEs, it's whichever is lower.
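To see how the "whichever is higher / lower" rule plays out, here is a small illustrative sketch using the tiers above. The amounts come straight from the Act; the function itself is only an illustration, not an official calculation method.

    # Maximum fine tiers described above: (fixed amount in EUR, % of worldwide annual turnover)
    FINE_TIERS = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation":    (15_000_000, 0.03),
        "misleading_info":     (7_500_000, 0.01),
    }

    def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool) -> float:
        """Return the maximum administrative fine for a violation category.
        Large organizations face the higher of the two caps; SMEs and startups the lower."""
        fixed_cap, pct = FINE_TIERS[violation]
        turnover_cap = pct * annual_turnover_eur
        return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

    # A large company with EUR 2 billion turnover using a prohibited system:
    print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))  # 140000000.0 (7% exceeds EUR 35M)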

 

Implementation Timeline

The AI Act entered into force on August 1, 2024, but its obligations apply in phases:

  • February 2, 2025: Ban on prohibited AI systems.

  • August 2, 2025: Obligations for providers of GPAI models.

  • August 2, 2026: Most remaining obligations, including those for high-risk AI systems listed in Annex III.

  • August 2, 2027: Obligations for high-risk AI systems that are products requiring third-party conformity assessment, or that are used as safety components in such products.

 

Your Unique Path to Compliance

This introduction to the AI Act provides a general overview. How the AI Act applies to you will depend on several factors, including the nature of your AI system, your role, its interaction with other laws like the GDPR, and your business model.

I hope this overview has been helpful! I look forward to diving deeper into this topic in the future. In the meantime, don't forget to subscribe to our YouTube channel for more video content or reach out to schedule a complimentary consultation today.
