Key EU AI Act Deadlines
The EU AI Act – Regulation (EU) 2024/1689 – entered into force on 1 August 2024 and applies in full from 2 August 2026.
Chapters I and II, covering AI literacy and prohibited AI practices, apply from 2 February 2025.
Chapter V requirements for general-purpose AI (GPAI) models apply from 2 August 2025 (with enforcement from 2 August 2026).
Find out about our EU AI Act pre-compliance services
Explore our EU AI Act Resource Centre
The EU AI Act took another step forward
Spain, holding the Presidency of the Council of the EU, has taken up the baton to steer the EU AI Act through its negotiation phase.
The EU AI Act is relevant to all sectors and domains in which we operate, from security to smart cities and buildings, transportation, aviation and retail.
If AI-generated outputs are used within the EU, the system is subject to the Act, whether or not the software itself resides in the EU.
If you have just started tracking global AI regulatory developments and you are concerned about the impact of the EU AI Act on your organisation and its activities, the latest texts are:
The foundational wording agreed in December 2022, known as the ‘compromise text’ (200+ pages), plus over 700 amendments agreed in June 2023 (300+ pages).
The key areas in focus during the current phase are:
1. the definition of AI
2. high-risk AI classifications
3. high-risk AI use cases
4. fundamental rights impact assessments
Unpacking these developments, Anekanta® can help your organisation in a number of ways:
1. the definition of AI
Anekanta® has created a framework which determines whether or not a software system qualifies as AI. Without this understanding, the board may not be equipped to assess risk.
2. high-risk AI classifications
Anekanta® has classified all relevant AI systems within our sectors. Without this, you may not know whether AI is in use in your business, or whether it is likely to be subject to the EU AI Act. Uncertainty increases the risk of misclassification, which under the Act attracts penalties if not remedied in a timely manner.
3. high-risk AI use cases
Anekanta® researches, analyses and determines high-risk use cases and recommends the mitigations required. The Act sets out not just the types of AI systems which present higher risks; it also provides examples of specific use cases considered to present the greatest risks to the health, safety and fundamental rights of people.
4. fundamental rights impact assessments
Anekanta® has developed an automated high-level fundamental rights impact assessment, already in use by our global client base: a ‘super-charged’ data protection impact assessment, required for high-risk implementations.
Anekanta®AI and Anekanta®Consulting are established in the field of high-risk AI research, impact and risk evaluation, focusing on legislative frameworks, standards and fundamental rights.
To find out how we may assist your organisation, increase opportunities and reduce risk, please get in touch. We work confidentially under NDA.
Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
Subscribe to our updates by email
Your email will not be shared with third parties and is processed in accordance with our Privacy Policy, the UK DPA 2018, GDPR and the CCPA. You can unsubscribe at any time by clicking ‘unsubscribe’ in our emails. We may send you curated content and newsletters in the future.
