Key EU AI Act Deadlines
The EU AI Act – Regulation (EU) 2024/1689 – entered into force on 1 August 2024 and applies in full from 2 August 2026.
Chapters I and II, covering AI literacy and prohibited practices, apply from 2 February 2025.
Chapter V general-purpose AI (GPAI) obligations apply from 2 August 2025 (enforced from 2 August 2026).
Find out about our EU AI Act pre-compliance services
Explore our EU AI Act Resource Centre
Organisations are urged to review their development road maps and current use cases to determine whether their AI systems sit in the prohibited and high risk categories.
Who should take action to classify their AI Systems?
If you are a: “‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge;”*
or a: “‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;”*
and your AI system’s decisions are used in the EU, then this applies to you.
*Source: EU AI Act, Regulation (EU) 2024/1689, Article 3
When do you need to take action?
If you are a provider or deployer of a prohibited system, you must comply by 2 February 2025, six months after the Act entered into force.
High-risk AI system providers and deployers should move as soon as possible to implement Responsible AI practice across their development, integration, procurement and operational teams, third-party providers and governing body.
What do you need to do now?
The design, development, integration and testing cycle for new technology can be lengthy. Progressing from an idea on the bench through commercialisation and into a global channel can take two to three years, depending on the product or service and whether hardware development is involved. The cycle is arguably faster when the AI system is a system of joined systems operating notionally through a cloud-based provider.
Additionally, organisations may already be using AI systems without oversight, because those systems have evolved commercially without regulatory guardrails. There has been no consistency in the language and terminology used to describe functionality, and some vendors choose not to disclose their AI techniques at all, as a commercial decision to focus attention on other aspects of system functionality.
Given this long review cycle, we recommend that you do not delay. Larger organisations also have long and complex decision-making cycles involving many stakeholders, some of whom may need time to get up to speed.
We have set out some of the actions (non-exhaustive) which will set you on the right path:
✔ Determine the risk category of your AI system
✔ Set a governing body level policy
✔ Develop a road map to compliance
✔ Design in trustworthy principles
✔ Implement the ISO/IEC 42001 AI Management System
✔ Meet all legal/conformity requirements
✔ Create compliant technical documentation
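To make the first action on this list concrete, here is a minimal, purely illustrative sketch of how an internal AI inventory tool might triage described use cases into the Act's risk tiers. The practice names and keyword matching are hypothetical assumptions for illustration only; real classification requires legal analysis of the Act's annexes and is not a substitute for professional advice.

```python
# Illustrative, simplified triage of an AI use case into EU AI Act risk tiers.
# The tier structure (prohibited / high-risk / other) follows the Act;
# the example keyword lists are hypothetical and non-exhaustive.

PROHIBITED_PRACTICES = {
    "social scoring by public authorities",
    "untargeted facial image scraping",
}

HIGH_RISK_AREAS = {
    "biometric identification",
    "employment and worker management",
    "access to essential services",
}

def triage(use_case: str) -> str:
    """Return a provisional risk tier for a described use case."""
    text = use_case.lower()
    if any(practice in text for practice in PROHIBITED_PRACTICES):
        return "prohibited"
    if any(area in text for area in HIGH_RISK_AREAS):
        return "high-risk"
    # Everything else still needs review for transparency obligations.
    return "minimal/limited risk - verify transparency obligations"

print(triage("CV screening for employment and worker management"))
# high-risk
```

In practice this kind of triage would feed a register of AI systems for governing-body review, with each provisional tier confirmed against the Act's full criteria before any compliance decision is made.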
We provide a full suite of services, underpinned by our risk intelligence systems built on the OECD principles, the EU AI Act and all related regulations, to lead you through this process from inception to implementation.
Contact us for further information.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.