How to Navigate the EU AI Act

Introduction
The EU AI Act, which took effect in August 2024, mandates that organizations classify AI systems into four risk categories: Unacceptable, High, Limited, or Minimal (see Figure 1 below). Each category carries distinct compliance requirements that must be met within a specified enforcement timeline.

In this blog, we’ll break down the enforcement timeline and what organizations need to do at each stage to stay compliant.
Key Dates for the EU AI Act Enforcement Timeline
The EU AI Act is being phased in gradually as shown below and in Figure 2:
- August 2024 – The Act enters into force.
- February 2025 – Banned AI practices and AI literacy provisions take effect.
- August 2025 – Rules for general-purpose AI (GPAI) models and governance structures apply.
- August 2026 – Full enforcement of high-risk AI compliance requirements.
- August 2027 – Remaining obligations for high-risk AI systems under Article 6(1) (AI embedded in regulated products) apply.

Details on Important Dates for the EU AI Act
February 2, 2025 – Ban on Prohibited AI & AI Literacy Initiatives
As of February 2, 2025, the ban on Unacceptable Risk AI systems is in effect. Prohibited practices include:
- AI systems that manipulate human behavior in harmful ways.
- Biometric categorization based on sensitive data such as race, religion, or sexual orientation.
- Social scoring by governments or private entities.
- Real-time biometric surveillance in public spaces, with limited exceptions for law enforcement.
Here are steps for organizations to follow:
- Conduct an AI audit to check whether any of your AI systems fall under the prohibited categories (a minimal sketch follows below).
- Implement internal training programs to improve AI literacy among employees.
- Ensure AI-powered applications comply with data protection and privacy laws.
In addition, organizations must run AI literacy initiatives to promote awareness of AI risks and ensure responsible AI usage.
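To make the audit step concrete, here is a minimal sketch in Python of an inventory check that flags systems whose declared uses match a prohibited category. The system names and category labels are hypothetical, not an official taxonomy:

```python
# Illustrative only: flag AI systems whose declared uses match a
# prohibited (Unacceptable Risk) category. Labels are hypothetical.

PROHIBITED_USES = {
    "behavioral_manipulation",
    "biometric_categorization_sensitive",
    "social_scoring",
    "realtime_public_biometric_id",
}

ai_inventory = [
    {"name": "support-chatbot", "uses": {"customer_service"}},
    {"name": "ad-optimizer", "uses": {"behavioral_manipulation"}},
]

def flag_prohibited(inventory):
    """Return names of systems whose uses intersect the prohibited set."""
    return [s["name"] for s in inventory if s["uses"] & PROHIBITED_USES]

print(flag_prohibited(ai_inventory))  # ['ad-optimizer']
```

In practice the inventory would come from an internal AI registry, but the core idea is the same: every system carries declared uses, and anything intersecting the prohibited set gets escalated for review.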
August 2, 2025 – Rules for General-Purpose AI (GPAI) & Governance Structures
Organizations developing general-purpose AI (GPAI) models, such as large language models (LLMs) and small language models (SLMs), need to perform the following steps:
- Conduct AI model risk assessments and document the risks (see the sketch after this list).
- Implement transparency measures for AI users.
- Establish an internal governance framework for AI oversight.
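One way to make the documentation step concrete is a structured record for each assessment, so findings can be versioned and tracked to closure. The schema below is a hypothetical sketch, not a prescribed format:

```python
# Illustrative sketch: a structured record for a GPAI model risk
# assessment. Field names and severity labels are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskFinding:
    category: str    # e.g., "prompt injection", "toxic output"
    severity: str    # e.g., "low", "medium", "high"
    mitigation: str  # planned or applied control ("" if none yet)

@dataclass
class ModelRiskAssessment:
    model_name: str
    assessed_on: date
    findings: list[RiskFinding] = field(default_factory=list)

    def open_high_severity(self):
        """High-severity findings that still lack a mitigation."""
        return [f for f in self.findings
                if f.severity == "high" and not f.mitigation]

assessment = ModelRiskAssessment("example-llm", date.today(), [
    RiskFinding("prompt injection", "high", ""),
])
print(len(assessment.open_high_severity()))  # 1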
With Enkrypt AI, organizations building GPAI models can:
- Conduct risk assessments with Red Teaming Agents. Risk assessments of over 100 models are publicly available on the Enkrypt AI Safety Leaderboard. Find a sample Red Teaming report of the DeepSeek R1 model here.
- Get Safety Alignment data for the vulnerabilities found. The alignment process reduces model risk by up to 70%. The Enkrypt AI Safety Leaderboard also contains risk results for models aligned by Enkrypt AI. Some popular models we ran alignment on are IBM Granite 3.1 8B, DeepSeek R1 Distill Llama, Jamba 1.5 Mini, and Aya 23 8B.
- Build real-time safety, integrity, compliance, and moderation with Enkrypt AI Guardrails. Check out our Guardrails Demo and the Enkrypt AI docs for information on all the guardrails.
August 2, 2026 – Full Compliance for High-Risk AI Systems
August 2, 2026 is a critical compliance deadline for organizations using high-risk AI systems in sectors such as healthcare, education, and financial services. Organizations deploying high-risk AI systems need to:
- Identify all high-risk AI applications.
- Develop and document risk mitigation strategies for AI-based decision-making.
- Appoint an AI compliance officer or create an AI risk management team.
- Create an AI Governance system to detect compliance breaches.
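The last item, breach detection, lends itself to automation. Here is a toy sketch of a governance check that scans decision logs for high-risk decisions missing the required human oversight. The log schema is hypothetical, not a real monitoring product:

```python
# Illustrative only: scan a decision log for high-risk decisions that
# lack the required human review sign-off. Schema is hypothetical.

decision_log = [
    {"id": 1, "system": "credit-scoring", "human_reviewed": True},
    {"id": 2, "system": "credit-scoring", "human_reviewed": False},
]

def find_breaches(log):
    """Return IDs of decisions missing human review."""
    return [rec["id"] for rec in log if not rec["human_reviewed"]]

breaches = find_breaches(decision_log)
if breaches:
    print(f"Compliance alert: unreviewed decisions {breaches}")
```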
Organizations can start implementing these security measures for high-risk applications today with the Enkrypt AI Compliance platform, which offers out-of-the-box support for building and monitoring EU AI Act compliance. See Figure 3 below.

Penalties for Non-Compliance
Non-compliant organizations face fines of up to €35 million or 7% of global annual turnover, whichever is higher. Here is a detailed breakdown by violation type (refer to Figure 4):
- Up to €35 million or 7% of global turnover for violating banned AI practices.
- Up to €15 million or 3% of global turnover for non-compliance with high-risk AI obligations.
- Up to €7.5 million or 1.5% of global turnover for failing to meet transparency requirements.
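Because each tier is the higher of a fixed cap and a share of worldwide annual turnover, exposure scales with company size. A minimal sketch of the calculation, using the tiers in the breakdown above (figures in euros):

```python
# Sketch of the penalty formula: each tier is the higher of a fixed
# cap and a percentage of worldwide annual turnover.

TIERS = {
    "prohibited_practice":  (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
    "transparency_failure": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    cap, pct = TIERS[violation]
    return max(cap, pct * annual_turnover_eur)

# A company with €2 billion turnover: 7% (€140M) exceeds the €35M cap.
print(f"€{max_fine('prohibited_practice', 2_000_000_000):,.0f}")
# €140,000,000
```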

Action Plan: What Organizations Need to Do Now
To stay ahead of the EU AI Act deadlines, organizations should:
- Conduct an AI risk assessment to classify their AI systems.
- Develop AI governance policies to manage compliance responsibilities.
- Train teams on AI ethics and regulatory compliance to ensure smooth transitions.
- Engage with AI regulatory bodies to understand evolving compliance expectations.
- Implement AI transparency measures and ensure responsible AI deployment.
As enforcement tightens, organizations should be proactive about compliance. Is your organization ready for the AI compliance journey? Start preparing today and harness the power of AI securely with Enkrypt AI.
EU AI Act FAQs
To whom does the EU AI Act apply?
It applies to public and private entities inside and outside the EU if their AI systems impact people in the EU. Exemptions include research, prototyping, and military AI.
What are the EU AI risk categories?
The risk categories are as follows:
- Unacceptable Risk: AI uses violating fundamental rights (e.g., social scoring, manipulation, untargeted scraping of facial images).
- High Risk: AI systems affecting safety or rights (e.g., hiring, credit scoring, medical decisions).
- Limited Risk: AI requiring clear disclosure (e.g., chatbots, deepfakes).
- Minimal Risk: Most AI systems, with no additional legal obligations.
How do I know if an AI system is high-risk according to the EU AI Act?
A system is high-risk if:
- It is a safety component of a regulated product (e.g., medical AI).
- It is listed in Annex III for sensitive applications (e.g., hiring, law enforcement, education).
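As a rough illustration (not legal advice), this two-branch test can be expressed as a simple check. The area list below is paraphrased from Annex III and is not exhaustive:

```python
# Illustrative two-branch test: a system is high-risk if it is a safety
# component of a regulated product OR falls under an Annex III area.
# The area list is paraphrased and not exhaustive.

ANNEX_III_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "justice and democratic processes",
}

def is_high_risk(is_safety_component: bool, use_area: str) -> bool:
    return is_safety_component or use_area in ANNEX_III_AREAS

print(is_high_risk(False, "employment"))        # True: e.g., resume screening
print(is_high_risk(False, "customer service"))  # False under this rough test
```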
What are a few examples of high-risk AI use cases for the EU AI Act?
- Critical infrastructure (e.g., traffic control).
- Education (e.g., grading systems).
- Employment (e.g., resume screening).
- Law enforcement (e.g., risk assessments on suspects).
- Biometric identification (e.g., facial recognition).
What are the obligations for high-risk AI providers according to the EU AI Act?
- Conduct conformity assessments.
- Ensure data quality, transparency, and human oversight.
- Maintain compliance through audits and risk management.
- Register public-sector AI in a public database.
What role does standardization play according to the EU AI Act?
European standards will define compliance requirements. They will ensure consistency across AI regulations and provide a "presumption of conformity."
How are general-purpose AI models regulated under the EU AI Act?
Providers must disclose model information, comply with copyright laws, and mitigate risks. Models trained with 10²⁵ FLOPs or more are deemed to pose systemic risk and must follow stricter rules.
Why is 10²⁵ FLOPs used for systemic risk AI?
It acts as a proxy for advanced AI capabilities. The threshold may change based on new measurement methods and technological advances.
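For intuition, training compute is often approximated as roughly 6 × parameters × training tokens; this rule of thumb and the example model below are illustrative assumptions, not part of the Act:

```python
# Back-of-the-envelope check against the 10^25 FLOPs systemic-risk
# threshold, using the common ~6 * params * tokens approximation.

THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {flops >= THRESHOLD_FLOPS}")
# 6.30e+24 FLOPs -> systemic risk: False
```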
Is the EU AI Act future-proof?
Yes, it is adaptable via delegated acts, industry standards, and frequent evaluations to keep up with AI advancements.
How does the EU AI Act regulate biometric identification?
- Real-time biometric identification in public spaces is banned, with limited law enforcement exceptions.
- Post-event identification requires judicial authorization.
- Authentication (e.g., unlocking phones) is not regulated.
How does the EU AI Act address bias?
High-risk AI must:
- Use diverse training data.
- Be auditable and traceable.
- Avoid unfair discrimination, especially against marginalized groups.
What is a fundamental rights impact assessment according to the EU AI Act?
Public bodies and certain businesses using high-risk AI (e.g., credit scoring) must assess and report potential rights impacts.
When does the EU AI Act take effect?
- February 2025: Prohibitions and AI literacy rules.
- August 2025: General-purpose AI rules.
- August 2026: Full implementation.
- August 2027: Certain high-risk AI obligations.
How will the EU AI Act be enforced?
National authorities oversee compliance, while the EU AI Office manages general-purpose AI. The EU AI Board coordinates regulation, and expert panels provide guidance.
What is the role of the European AI Board?
It ensures AI regulations are consistently applied across the EU and advises on policy.
What are the non-compliance penalties for the EU AI Act?
- Up to €35 million or 7% of global turnover for violating banned AI practices.
- Up to €15 million or 3% of global turnover for non-compliance with high-risk AI obligations.
- Up to €7.5 million or 1.5% of global turnover for failing to meet transparency requirements.
How is the General-Purpose EU AI Code of Practice developed?
It is created with input from AI providers, industry groups, and regulators. The EU AI Office oversees the process.
Does the EU AI Act address environmental impact?
Yes, providers must report energy consumption, and systemic risk AI must assess energy efficiency. Standardized reporting methods will be developed.
How does the EU AI Act support innovation?
It builds trust in AI, increases legal certainty, and enables regulatory sandboxes for real-world AI testing under controlled conditions.
What is the EU AI Pact?
A voluntary initiative encouraging early compliance with EU AI Act rules through collaboration, knowledge sharing, and best practices.