EU AI Act – Risk and Compliance Services
The European Union’s AI Act (Regulation (EU) 2024/1689) represents a significant shift in the regulatory landscape for AI systems, introducing rigorous requirements that vary depending on the type of organisation, AI use case, and associated risk category. For organisations involved in the AI value chain—from developers to end-users—understanding and meeting these compliance obligations is not just a legal necessity, but a critical step toward responsible and ethical AI deployment.
At Anekanta®, we specialise in guiding organisations through this complex regulatory environment. Whether you’re developing high-risk AI systems or integrating AI into your operations and would like to start with the ISO/IEC 42001 AI Management System, our team of experts is here to support you every step of the way. From assessing your value chain relationships to ensuring your systems meet the legal definition of AI, we provide the insights and tools needed to achieve full compliance and safeguard your organisation’s reputation.
Insights into the actions needed now and short term
- Introduction
- Legal definition of AI
- How to determine whether a software system is AI?
- Where are you in the AI system value chain?
- Entry into force and application
- When is the Act applicable and when is it enforced?
- What should be done now?
- How Anekanta® supports the AI value chain ecosystem
Introduction
We work with many organisations, from large enterprises and SMEs to governments and NGOs, to help them understand their AI risks. A common unrecognised risk is that they do not know whether they are using AI systems at all. This is due to a number of factors, but the two most critical in the EU AI Act context are:
- Boards and senior management teams do not have sufficient knowledge of the technology, for example an understanding of LLMs and generative AI systems. They may not follow the language of their technical teams and, as a result, ignore the problem, fail to recognise the risk, or fail to recognise that it is their responsibility to ensure legal and responsible AI governance.
- Secondly, many organisations do not have a clear and consistent understanding of what AI is; developers hold different views about what constitutes an AI system: some deny it is AI at all, others describe it as machine learning rather than AI. These perspectives may be correct in context; however, a unified understanding is crucial to the effective, legal and ethical development and management of AI systems.
Legal definition of AI
The EU AI Act aims to end ambiguity about whether a system is AI through the legal definition of an AI system.
Legal definition of AI Article 3(1)1
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
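The cumulative elements of the Article 3(1) definition can be read as a rough screening checklist. The sketch below is purely illustrative (the field names and the screening function are our own shorthand, not legal criteria, and any real determination needs expert and legal review); note that adaptiveness after deployment is something a system "may" exhibit, so it is not treated as a required element.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative characteristics of a software system, mapped to Article 3(1)."""
    machine_based: bool               # runs as software on hardware
    operates_with_autonomy: bool      # some degree of independence from human control
    infers_outputs_from_input: bool   # derives predictions, content, recommendations or decisions
    influences_environment: bool      # outputs can affect physical or virtual environments

def meets_article_3_1(profile: SystemProfile) -> bool:
    """Rough screen against the cumulative elements of the legal definition.

    Adaptiveness after deployment 'may' be exhibited, so it is deliberately
    not treated as a required element here.
    """
    return all([
        profile.machine_based,
        profile.operates_with_autonomy,
        profile.infers_outputs_from_input,
        profile.influences_environment,
    ])

# A fixed rule-based lookup table with no inference would fail the screen:
lookup_table = SystemProfile(True, False, False, True)
print(meets_article_3_1(lookup_table))  # False
```

A sketch like this only captures the surface of the definition; in practice the inference element in particular requires careful technical interpretation of vendor documentation.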
How to determine whether a software system is AI?
To help organisations, we have developed a number of evaluation systems used in-house by our experienced research team, who are deeply conversant in AI technologies. Our systems may use generative AI/LLMs to ingest AI system technical documents and determine AI system characteristics; however, vendor documentation often does not reveal the full characteristics, because there has been no standard or legal requirement for AI system functions to be disclosed in commercial documentation. Model cards are beginning to emerge; if these are produced consistently from one vendor to the next, more accurate automated results will become possible. In the meantime, our team’s specialised AI technical knowledge, combined with commercial and academic research skills, allows us to interpret the documentation and functions of both ‘classic’ and ‘generative’ AI systems.
The EU AI Act requires a consistent structure for the disclosure of technical information about AI systems, so within the next year or so we expect it to become easier for developers and users of AI systems to determine whether a system is AI as defined in law.
Where are you in the AI system value chain?
There are different requirements depending upon where an organisation resides in the value chain. The definitions of the value chain actors are provided below. If an organisation is classified as an AI system value chain actor, it should start its EU AI Act compliance journey as soon as possible.
‘provider’
a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
‘deployer’
any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
‘authorised representative’
a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
‘importer’
a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;
‘distributor’
a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;
‘operator’
a provider, product manufacturer, deployer, authorised representative, importer or distributor;
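The definitions above can be approximated as a set of yes/no questions about an organisation. The sketch below is a simplified illustration (the question keys are our own paraphrases of the Article 3 definitions, not legal tests), and an organisation can hold several roles at once:

```python
def classify_roles(org: dict) -> set[str]:
    """Map yes/no answers about an organisation to candidate value chain roles.

    Keys are illustrative paraphrases of the Article 3 definitions; a real
    classification requires legal review of the specific facts.
    """
    roles = set()
    if org.get("develops_ai_system") and org.get("places_on_market_own_name"):
        roles.add("provider")
    if org.get("uses_ai_under_own_authority_professionally"):
        roles.add("deployer")
    if org.get("established_in_union") and org.get("places_third_country_system_on_market"):
        roles.add("importer")
    # A distributor is in the supply chain but is neither the provider nor the importer
    if org.get("makes_system_available_on_union_market") and not {"provider", "importer"} & roles:
        roles.add("distributor")
    if org.get("established_in_union") and org.get("holds_written_mandate_from_provider"):
        roles.add("authorised_representative")
    # 'operator' is the umbrella term covering any of the roles above
    if roles:
        roles.add("operator")
    return roles

# A firm that builds a model and sells it under its own brand:
print(classify_roles({"develops_ai_system": True, "places_on_market_own_name": True}))
```

Note how every actor is also an ‘operator’, which is why obligations addressed to operators reach the whole value chain.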
Entry into force and application
The Regulation 2024/1689 “EU AI Act” has been in force since 1 August 2024, and the applicability of its provisions is set out in Article 113. [comments added]
“Article 113
Entry into force and application
This Regulation shall enter into force on the twentieth day following that of its publication [12th July 2024] in the Official Journal of the European Union.
It shall apply from 2 August 2026. However:
(a) Chapters I and II shall apply from 2 February 2025; [AI Literacy and Prohibitions]
(b) Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101; [notified bodies, general-purpose AI models, governance, penalties and confidentiality]
(c) Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at Brussels, 13 June 2024.”
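The staged dates in Article 113 lend themselves to a simple lookup. The sketch below uses only the dates quoted above; the provision labels are paraphrased summaries for orientation, not legal text:

```python
from datetime import date

# Application dates taken from Article 113; labels paraphrase the cited provisions
APPLICATION_DATES = {
    "Chapters I and II (AI literacy, prohibitions)": date(2025, 2, 2),
    "Ch. III s.4, Ch. V, VII, XII and Art. 78 (except Art. 101)": date(2025, 8, 2),
    "General application of the Regulation": date(2026, 8, 2),
    "Article 6(1) and corresponding obligations": date(2027, 8, 2),
}

def provisions_applicable(on: date) -> list[str]:
    """List which staged provisions already apply on a given date."""
    return [name for name, d in APPLICATION_DATES.items() if on >= d]

print(provisions_applicable(date(2025, 3, 1)))
# -> ['Chapters I and II (AI literacy, prohibitions)']
```

The point of the staging is visible immediately: the prohibitions and AI literacy duties bite first, well before the general application date.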
When is the Act applicable and when is it enforced?
The enforcement of the Act will occur in stages, dependent upon the applicability of the Articles. Application of the prohibitions and AI literacy requirements commenced in February 2025. Anekanta® has developed an AI Literacy and Governance training programme for Boards which is aligned with the requirements of Article 4 (find out more). Furthermore, our AI Risk Intelligence System™ and use case evaluation services will guide your high-risk and prohibited systems towards lower risk categories and help you to meet the compliance requirements.
If your systems are in the prohibited category, contact us now and we may be able to help you avoid a ban. High-risk AI system value chain actors should move forward as soon as possible to implement Responsible AI practice throughout their development, integration, procurement and operational teams, third-party providers and governing body. The requirements and enforcement for high-risk AI systems apply from 2 August 2026, two years after the law entered into force.
What should be done now?
The design, development, integration and testing cycle can be lengthy for new technology. Progressing from an idea on the bench, through commercialisation and into a global channel, can take 2-3 years depending on the product or service and whether hardware development is involved. This is arguably faster when the AI system is a system of joined systems notionally operating through a cloud-based provider.
Additionally, organisations may already be using AI systems without oversight, because these systems have evolved commercially without any regulatory guardrails. There has been no consistency in the language and terminology used to describe functionality, and some vendors do not communicate AI techniques at all, as a commercial decision to focus attention on other aspects of system functionality.
Due to the long cycle of review, we recommend that you do not delay. Furthermore, larger organisations often have long and complex decision making cycles which involve many stakeholders some of whom may need to get up to speed.
How Anekanta® supports the AI value chain ecosystem
There are a number of ways in which Anekanta® supports the entire AI system value chain, from the developer through to the end user.
✔ Assess the value chain relationships
✔ Determine whether the AI systems meet the legal definition
✔ Determine the risk category of AI systems
✔ Legally and ethically evaluate the use cases and identify the ‘red-line’ of prohibition
✔ We specialise in Chapter III, Article 6/Annex III high-risk classification and requirements
✔ Guide fundamental rights impact risk assessments
✔ Support ‘governing body’ level policy making
✔ De-risk and develop product road maps to compliance
✔ Support and train developers to design-in trustworthy principles
✔ Preparation for an ISO/IEC 42001 AI Management System (the foundations of the EU AI Act QMS)
✔ Meet all legal/conformity requirements
✔ Create compliant technical documentation and AI Literacy programmes and more
The EU AI Act compliance journey starts with Anekanta® and we are alongside our customers all the way.
Please get in touch for more information about our full suite of services, which will lead you through your EU AI Act compliance process from inception through to implementation.

Anekanta® is your partner for AI risk assessment, classification and conformity assessment preparation. Contact us today and let’s start the conversation.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689 (accessed 12 July 2024). ↩︎
