Key EU AI Act Deadlines
The EU AI Act – Regulation (EU) 2024/1689 – entered into force on 1 August 2024 and applies in full from 2 August 2026.
Chapters I and II, including the AI literacy and prohibitions provisions, apply from 2 February 2025.
Chapter V GPAI requirements apply from 2 August 2025 (with enforcement from 2 August 2026).
Find out about our EU AI Act pre-compliance services
Explore our EU AI Act Resource Centre
The trilogue negotiators reached a provisional agreement on the EU Artificial Intelligence Act on 9 December 2023.
The EU AI Act will become law in all EU Member States in early 2024. Certain AI systems will have to comply sooner than the 24-month grace period.
The main new elements of the EU AI Act are:
✔ rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
✔ a revised system of governance with some enforcement powers at EU level
✔ an extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards
✔ better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.
✔ the definition aligns with the approach proposed by the OECD, distinguishing AI systems from simpler software systems
A wide range of high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.
✔ use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes should be exceptionally allowed.
✔ a fundamental rights impact assessment is required before a high-risk AI system is deployed.
✔ increased transparency will be required regarding the use of high-risk AI systems.
✔ public entities will also be obliged to register in the EU database for high-risk AI systems.
✔ an obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system.
Since AI systems are developed and distributed through complex value chains, the compromise agreement includes:
✔ changes to allocation of responsibilities and roles of the various actors in value chains (providers and users).
Where the risk is deemed unacceptable, these systems will be banned from the EU. Prohibited practices include:
✖ cognitive behavioural manipulation,
✖ the untargeted scraping of facial images from the internet or CCTV footage,
✖ emotion recognition in the workplace and educational institutions,
✖ social scoring,
✖ biometric categorisation to infer sensitive data, such as sexual orientation or religious beliefs, and
✖ some cases of predictive policing for individuals.
Rules for foundation models.
✔ foundation models must comply with specific transparency obligations before they are placed on the market.
✔ a stricter regime was introduced for ‘high impact’ foundation models, which can propagate systemic risks along the value chain.
(Source: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/)
Note that some AI systems will have to comply as early as late 2024.
Anekanta®AI evaluates your contextual use case, risk categorisation, fundamental rights impact, value chain risk and more.
To find out how to comply with the Act, get in touch.
Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
