Key EU AI Act Deadlines
The EU AI Act – Regulation (EU) 2024/1689 – entered into force on 1 August 2024, with most provisions applying from 2 August 2026.
Chapters I and II, covering AI literacy and prohibited AI practices, apply from 2 February 2025.
Chapter V requirements for general-purpose AI (GPAI) models apply from 2 August 2025, with enforcement from 2 August 2026.
Find out about our EU AI Act pre-compliance services
Explore our EU AI Act Resource Centre
Anekanta Consulting welcomes the EU proposals released today
We believe that the new regulation will provide clarity to developers and users of AI-based systems and will hold all stakeholders to account for the impact of AI used in high-risk scenarios.
The EU is laying down harmonised rules on artificial intelligence, which will also amend certain Union legislative acts.
Against this political context, the Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:
Source: Extract from the new proposal
• ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
• ensure legal certainty to facilitate investment and innovation in AI;
• enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
• facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
What does this mean for AI providers?
The new proposal, which still has to pass through the EU legislative process, sets out a number of new requirements which must be met before AI-based products can be placed on the market in the EU.
We have provided a brief insight into a number of the requirements:
A new AI CE marking based on meeting a number of criteria, including detailed technical documentation, transparency around training data sets, detailed descriptions of how the AI makes decisions, and logging of those decisions.
Ongoing market monitoring which includes continuing assessments and close scrutiny of AI which learns, to ensure that the learning does not introduce bias.
An outright ban on AI which subliminally manipulates consumer choices or opinions, AI which exploits vulnerabilities, and AI which evaluates trustworthiness by analysing social behaviour or personality characteristics observed over time.
Highly restricted use of biometric identification systems, such as automated facial recognition used by law enforcement in publicly accessible places, unless the use is explicitly authorised by the governing authority and serves a very specific purpose.
The new regulation will take priority over certain clauses in the Law Enforcement Directive (EU) 2016/680.
The proposed regulations are risk-based rather than prescriptive about particular AI technologies.
High-risk AI categories include:
1. Biometric identification and categorisation of natural persons
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers management and access to self-employment
5. Access to and enjoyment of essential private services and public services and benefits
6. Law enforcement
7. Migration, asylum and border control management
8. Administration of justice and democratic processes
If AI is trained using biased historical data, it is likely to produce biased recommendations. We see the proposed regulations as a starting point towards preventing the amplification of societal and institutional bias.
We will be providing advice and guidance to our AI developer and user clients over the coming weeks.
If you have any questions please contact us at ask@anekanta.co.uk or fill in our contact form here.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
