The EU AI Act: Final Status, Key Rules, and What's Next

Published May 16, 2026

Let's cut to the chase. The EU AI Act is no longer a "proposed" piece of legislation. It's done. After years of debate, political wrangling, and last-minute negotiations over models like ChatGPT, the world's first comprehensive AI law was formally adopted in mid-2024. The final text was published in the Official Journal of the European Union in July 2024, marking the official start of the clock for compliance.

If you're running a tech company, selling software in Europe, or using AI in any part of your operations, this isn't just another regulatory update. It's a foundational shift in how AI systems are built, sold, and used. The problem is, the official documents are dense, filled with legal jargon, and frankly, a bit overwhelming. Everyone's talking about "high-risk" systems and "prohibited practices," but what do those terms actually mean for your product roadmap or your HR department's new resume-screening tool?

I've been tracking this legislation since its first draft, and I've seen the confusion firsthand. Many founders assume it only applies to massive "Big Tech" firms. That's a dangerous misconception. The obligations can trickle down to startups, SMEs, and even non-EU companies targeting the European market. This guide breaks down the EU AI Act from a practical, operational perspective—what's banned, what's heavily regulated, and the concrete steps you need to take now.

Where Things Stand: The Official Timeline

The legislative marathon is over. Here’s the critical path from law to enforcement.

The Countdown Has Begun

The Act entered into force on 1 August 2024, 20 days after its publication in the Official Journal. But don't breathe a sigh of relief just yet. The requirements stagger in. This phased approach is both a blessing and a curse—it gives you time, but it also means you can't afford to wait.

The deadlines aren't suggestions; they're hard stops. Missing them means operating illegally in the EU.

  • 6 months after entry into force (early 2025): The bans on prohibited AI practices apply. You can no longer deploy or use systems that fall into these categories. Who needs to pay attention: everyone, from social media platforms to private companies using employee monitoring tools.
  • 12 months after entry into force (mid-2025): Rules for General-Purpose AI (GPAI) models kick in, including transparency obligations for all GPAI models and stricter requirements for models deemed to have "systemic risk." Who needs to pay attention: developers of foundation models (like OpenAI, Anthropic, Meta, Mistral) and companies integrating them.
  • 24 months after entry into force (mid-2026): The full set of rules for high-risk AI systems becomes mandatory. This is the big one for many B2B and enterprise software providers. Who needs to pay attention: providers and deployers of AI in sectors like healthcare, machinery, education, HR, and law enforcement.
  • 36 months after entry into force (mid-2027): The high-risk rules extend to AI systems embedded in products regulated under existing EU laws (like medical devices). Who needs to pay attention: manufacturers in the medical device, aviation, and automotive sectors.

Notice the first deadline is already on the horizon. If your company uses any form of AI for subliminal manipulation or real-time biometric surveillance in public spaces, you have less than a year to dismantle it.
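If you want to keep the countdown in front of your team, a few lines of Python are enough. The application dates below are the ones stated in the final text (2 February 2025, 2 August 2025, 2 August 2026, 2 August 2027); the script itself is just an illustrative sketch, not legal advice.

```python
from datetime import date

# Application dates as set out in the final text of the EU AI Act.
MILESTONES = {
    "Prohibited AI practices banned": date(2025, 2, 2),
    "GPAI model obligations apply": date(2025, 8, 2),
    "High-risk AI rules apply (Annex III use cases)": date(2026, 8, 2),
    "High-risk rules for AI embedded in regulated products": date(2027, 8, 2),
}

today = date.today()
for label, deadline in sorted(MILESTONES.items(), key=lambda item: item[1]):
    days = (deadline - today).days
    status = f"{days} days left" if days >= 0 else f"in force for {-days} days"
    print(f"{deadline.isoformat()}  {label}: {status}")
```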

The Core Rules: Risk, Bans, and Transparency

The Act's genius—and its complexity—lies in its risk-based pyramid. Not all AI is treated equally.

1. Unacceptable Risk: The Absolute Bans

These practices are outlawed. Full stop. The most critical ones to understand are:

  • Manipulative AI: Systems that deploy subliminal techniques beyond a person’s consciousness to materially distort behavior in a way that causes harm. Think AI that secretly influences you to make detrimental financial decisions.
  • Social Scoring: Evaluating or classifying people based on social behavior or predicted personal traits, leading to unjustified detrimental treatment. This directly targets systems like China's social credit concept.
  • Real-Time Remote Biometric Identification: Using AI for facial recognition in publicly accessible spaces for law enforcement. There are extremely narrow exceptions (like searching for a missing child or preventing a terrorist attack), but these require prior authorization from a judicial authority or an independent administrative authority. Many cities using this tech will have to shut it down.

A common mistake? Companies think "we don't do facial recognition, so we're fine." But the ban on manipulative AI is broad and fuzzy. Could your hyper-personalized marketing campaign, designed to exploit cognitive biases, be construed as manipulative? It's a grey area that lawyers will be debating for years.

2. High-Risk AI: The Heavy Compliance Lift

This is the category that will consume most compliance budgets. High-risk AI systems are those used in critical areas listed in the Act's Annexes. Key sectors include:

  • Biometrics: Non-real-time (retrospective) remote identification, such as in immigration control.
  • Critical Infrastructure: Managing traffic and the supply of utilities like water and electricity.
  • Education & Vocational Training: Systems for determining access and scoring exams.
  • Employment & HR: This is a huge one. Tools for resume screening, candidate ranking, and performance evaluation.
  • Essential Services: Credit scoring and determining access to social benefits.
  • Law Enforcement & Justice: Assessing evidence and predicting offenses.
  • Migration & Border Control: Assessing visa and asylum applications and risk profiles of people crossing borders.

If your AI falls into one of these areas, you're looking at a mountain of obligations: rigorous risk assessments, high-quality datasets, detailed documentation (a "technical file"), human oversight, and robust accuracy/cybersecurity standards. Conformity assessment is required, which for some systems means involving a notified third-party body.

The pain point for HR tech companies is particularly acute. That nifty AI tool you bought to filter 10,000 resumes down to 100? It's now a high-risk system. You, as the deployer, have obligations to ensure it's compliant, used with human oversight, and its decisions can be explained.

3. Limited Risk & Minimal Risk: Transparency and Light Touch

For systems like chatbots, the main requirement is transparency: users must be informed they are interacting with an AI. (Emotion recognition software carries a similar disclosure duty, though note that it also appears on the high-risk list.) It's a straightforward rule, but crucial for trust.

Everything else—like an AI for optimizing warehouse logistics or a spam filter—falls into minimal risk with no specific obligations, though general rules on accountability still apply.
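To show how light the limited-risk duty can be in practice, here's a minimal sketch of a chatbot reply wrapper that prepends a disclosure notice on the first turn. The function name and wording are my own invention; the Act requires that users be informed, not any particular phrasing.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human agent."

def wrap_reply(reply_text: str, is_first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{reply_text}"
    return reply_text

print(wrap_reply("Hi! How can I help you today?", is_first_message=True))
```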

The Big Fight: Rules for General-Purpose AI (GPAI)

This was the last-minute battleground. The original draft of the Act didn't properly address foundation models like GPT-4. The final version creates a new, layered regime for General Purpose AI.

All GPAI model providers must create technical documentation, comply with copyright law (a nod to the training data lawsuits), and provide detailed information to downstream companies that integrate their model.

Then, there's the "systemic risk" tier. If a model is trained with computing power exceeding 10^25 FLOPs (a threshold only the most powerful models today meet), it gets extra obligations: conduct model evaluations, assess and mitigate systemic risks, report serious incidents, and ensure robust cybersecurity. The European AI Office will monitor these models closely.

This creates a tricky supply chain. If you're a startup building an app on top of a "systemic risk" model, you're dependent on the provider's compliance. Your due diligence checklist just got longer.
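For rough orientation on where that 10^25 FLOP threshold sits, training compute is often approximated as about 6 FLOPs per parameter per training token. The sketch below uses that rule of thumb with made-up parameter and token counts; it is an illustration, not how the AI Office will measure anything.

```python
# Back-of-the-envelope check against the 1e25 FLOP systemic-risk threshold.
# Uses the common ~6 * parameters * training-tokens approximation for
# transformer training compute; the model sizes below are invented examples.
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

examples = {
    "hypothetical 7B-parameter model, 2T tokens": (7e9, 2e12),
    "hypothetical 70B-parameter model, 15T tokens": (70e9, 15e12),
    "hypothetical 1.8T-parameter model, 13T tokens": (1.8e12, 13e12),
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    side = "above" if flops >= SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the threshold)")
```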

Who This Hits: Impact on Different Businesses

Let's get specific. How does this translate to real-world scenarios?

For a US-based SaaS Startup Selling HR Software in Germany: Your candidate scoring module is high-risk. You must ensure it complies with all requirements before the 2026 deadline. This means auditing your training data for bias, building the technical documentation, and potentially going through a conformity assessment. Your price in Europe just went up.

For a European FinTech Using AI for Fraud Detection: This is likely high-risk (essential services). You need to implement human oversight loops where a person reviews and can override the AI's "high-risk" fraud flags. Your system's accuracy metrics need to be documented and maintained.
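To make "human oversight loop" concrete, here's a minimal sketch of routing a high-impact fraud flag to a human reviewer whose decision overrides the model's. The class, threshold, and decision labels are invented for illustration; the Act requires effective oversight, not this particular design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FraudFlag:
    transaction_id: str
    model_score: float   # model's estimated fraud probability
    model_decision: str  # "block" or "allow"

REVIEW_THRESHOLD = 0.8   # illustrative: blocks above this score require human review

def needs_human_review(flag: FraudFlag) -> bool:
    """High-impact blocks go to a trained reviewer before they take effect."""
    return flag.model_decision == "block" and flag.model_score >= REVIEW_THRESHOLD

def final_decision(flag: FraudFlag, reviewer_override: Optional[str] = None) -> str:
    """The reviewer's call wins; otherwise the model's decision stands.

    A real deployment would also log who reviewed the case and why,
    since the decision has to be explainable after the fact.
    """
    if needs_human_review(flag):
        return reviewer_override or "pending_review"
    return flag.model_decision

flag = FraudFlag("txn-42", model_score=0.93, model_decision="block")
print(final_decision(flag))                             # -> pending_review
print(final_decision(flag, reviewer_override="allow"))  # -> allow
```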

For a Retail Company Using Customer Analytics AI: If you're just analyzing shopping patterns for stock management (minimal risk), breathe easy. But if you're using emotion recognition via in-store cameras to tailor ads, you need clear signage informing customers at a minimum, and you should check whether the system tips into the high-risk category. If that analytics system is used to unfairly deny someone store credit, you might be veering into high-risk territory.

The extraterritorial reach is key. Like the GDPR, the EU AI Act applies to providers placing systems on the EU market, regardless of where they are established, and to deployers of AI systems located within the EU.

What to Do Next: A Practical Compliance Checklist

Don't panic. Start with a systematic inventory. This is my recommended first-phase approach, based on helping several companies through the initial scramble.

  1. Conduct an AI Inventory: Catalog every AI system you develop, sell, or use. Don't forget the "hidden" ones embedded in third-party software you license.
  2. Map to the Risk Pyramid: For each system, determine its likely classification: prohibited, high-risk, limited risk, or minimal risk. Get legal counsel to validate your high-risk assessments (a minimal code sketch of such an inventory record follows this list).
  3. Prioritize by Deadline: Address any potentially prohibited uses immediately. Then, focus on your high-risk systems for the 2026 deadline, but start the groundwork now—these projects take time.
  4. Gap Analysis for High-Risk Systems: For each high-risk candidate, audit it against the requirements: data quality, documentation, transparency, human oversight, accuracy, and robustness. Identify the gaps.
  5. Engage with Your GPAI Providers: If you rely on models from OpenAI, Google, etc., start asking them how they plan to comply with the EU AI Act and what information they will provide to you.
  6. Build Internal Governance: Assign responsibility. Someone needs to own this. Consider forming an interdisciplinary committee with tech, legal, and business leads.
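Here's a minimal sketch of what steps 1 and 2 can look like as a structured record. The field names, risk tiers, and example entries are my own illustration, not a format the Act prescribes.

```python
from dataclasses import dataclass

RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    name: str
    vendor: str      # "internal" for systems you build yourself
    use_case: str
    risk_tier: str   # one of RISK_TIERS, validated with legal counsel
    deadline: str    # the application date that matters for this tier
    owner: str       # who inside the company is accountable

inventory = [
    AISystemRecord("ResumeRanker", "third-party", "candidate screening",
                   "high", "2026-08-02", "Head of HR"),
    AISystemRecord("Support chatbot", "internal", "customer support",
                   "limited", "2026-08-02", "Head of Support"),
    AISystemRecord("Warehouse routing", "internal", "logistics optimisation",
                   "minimal", "n/a", "Ops lead"),
]

for record in inventory:
    assert record.risk_tier in RISK_TIERS
    print(f"{record.name}: {record.risk_tier} risk, owner: {record.owner}")
```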

Resources like the European Commission's website and guidance from national authorities (as they become available) will be essential. The UK's ICO also provides useful, parallel guidance on AI and data protection that complements the EU rules.

Clearing Up the Confusion: Your EU AI Act FAQs

We're a small startup using an API from a major AI company for a creative tool. Are we "providers" under the Act?

It depends on your level of modification. If you're simply calling the API and presenting the output, you're likely a "deployer" or "user," with lighter obligations (like ensuring human oversight if it's high-risk). If you fine-tune the model significantly with your own data and integrate it deeply into a product you sell, you might be considered a provider and take on the full compliance burden. The line is a fine one, and legal advice is crucial here.

What are the actual penalties for non-compliance?

They are severe, designed to be deterrents. Fines can be up to 35 million euros or 7% of global annual turnover (whichever is higher) for violations of the banned AI practices. For other violations, like failing to meet high-risk requirements, fines go up to 15 million euros or 3% of turnover. For SMEs and startups, the percentage-based fines could be existential. It's not a slap on the wrist.

Does the Act kill open-source AI development?

This was a major debate. The final text includes some exemptions to reduce the burden on open-source developers. If you're releasing an AI model under a free and open-source license, and you're not providing it as a commercial service, many of the GPAI obligations don't apply. However, if your open-source model is deemed to have systemic risk, you're not fully off the hook. The community is still analyzing the final language, but the intent was to protect non-commercial research and collaboration.

How does this interact with the GDPR?

They are separate but overlapping laws. The GDPR governs personal data processing. The AI Act governs the AI system itself. If your high-risk AI system processes personal data (most do), you must comply with both. The AI Act references GDPR principles like data minimization and purpose limitation. In practice, your Data Protection Officer and your new AI governance team need to work closely together. A compliance failure under one law likely indicates a problem under the other.

We only sell to businesses (B2B). Are we exempt?

Absolutely not. This is a critical misconception. The classification as high-risk depends on the use case, not the customer type. An AI system used for recruitment (B2B sale to an HR department) is explicitly listed as high-risk. The Act cares about the impact on the end-subject (the job candidate), not the commercial relationship between companies.