The EU AI Act is in force – how will industry prepare for its requirements?

Does the EU AI Act Apply to Your AI Systems?

If you operate within the AI value chain – developing, supplying, integrating or deploying AI systems used in Europe – you will have to comply with the EU AI Act.

As artificial intelligence (AI) continues to evolve and integrate into various sectors, the European Union has taken significant steps to ensure its safe and ethical use. The new EU regulation on AI, designed to prevent harms such as discrimination and to uphold the trustworthiness of AI systems, is a critical development for all stakeholders in the AI value chain. Whether you are developing, supplying, integrating or deploying AI systems in Europe, or your systems impact decisions made within the region, this regulation directly affects your operations. With a focus on transparency, conformity and responsible AI practices, the regulation provides clear guidelines for managing AI risk, particularly for high-risk AI systems. In this article, we delve into the specifics of the new law, outline key actions for compliance, and introduce the ISO/IEC 42001 AI Management System Standard – a framework which ensures AI risk is effectively managed within a robust governance structure.

Why did the EU legislate?

The EU has legislated to address the evidence of AI harms experienced in Europe – such as the self-learning discrimination revealed by Amnesty International in its report ‘Xenophobic Machines’ – through a flexible framework which reduces the risk of such harms occurring in the future.

The new law sets out guardrails to ensure the trustworthiness of AI systems, through conformity assessment for high-risk AI and a range of measures to ensure transparency when people interact with AI systems.

Origins of the OECD Trustworthy AI Principles

Most AI systems, including Gen AI, fall into the transparency or minimal-risk categories. For these categories, the Act encourages adoption of the OECD.AI trustworthy AI principles (adopted by the EU’s High-Level Expert Group on AI), which are embodied in the recitals. The seven principles form the foundations of Responsible AI practice.

Extracts from Recital (27)

While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI which are intended to help ensure that AI is trustworthy and ethically sound. The seven principles include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being and accountability.

The application of those principles should be translated, when possible, in the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards.

Regulation (EU) 2024/1689, full Recital (27) available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689


For high-risk AI, the law codifies Responsible AI through the Articles in Chapter III and Annex III (in addition to other references). For the high-risk category, we recommend a number of actions which should commence as soon as possible.

Your Initial Actions

✔ Assess the AI value chain relationships
✔ Determine whether the AI systems meet the legal definition
✔ Determine the risk category of AI systems and whether they are safety components of products (a first-pass triage sketch follows this list)
✔ Legally and ethically evaluate the use cases
✔ De-risk and develop product road maps to compliance
✔ Prepare for the ISO/IEC 42001 AI Management System (the foundations of the EU AI Act QMS)
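
To make the first steps concrete, here is a minimal sketch, assuming a Python-based register, of how an AI system inventory entry and a first-pass risk triage might be recorded. The AISystemRecord and RiskCategory names, the fields and the triage rules are our own illustrative simplifications of the Act’s categories – they are hypothetical, not legal logic and not part of Anekanta®’s proprietary frameworks.

```python
# Illustrative sketch only – not a legal determination.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high-risk"        # Annex III use cases and safety components
    TRANSPARENCY = "transparency"  # e.g. systems interacting with people (Article 50)
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    value_chain_role: str          # e.g. "provider", "deployer", "importer", "distributor"
    meets_legal_definition: bool   # screened against the Article 3(1) definition
    article_5_practice: bool       # e.g. social scoring, manipulative techniques
    is_safety_component: bool
    annex_iii_use_case: bool
    interacts_with_people: bool

    def triage(self) -> RiskCategory | None:
        """Very rough first pass; real categorisation needs expert and legal review."""
        if not self.meets_legal_definition:
            return None  # outside the scope of the Act
        if self.article_5_practice:
            return RiskCategory.PROHIBITED
        if self.is_safety_component or self.annex_iii_use_case:
            return RiskCategory.HIGH_RISK
        if self.interacts_with_people:
            return RiskCategory.TRANSPARENCY
        return RiskCategory.MINIMAL


# Example: a CV-screening tool (employment is an Annex III use case)
record = AISystemRecord(
    name="CV screening tool",
    value_chain_role="deployer",
    meets_legal_definition=True,
    article_5_practice=False,
    is_safety_component=False,
    annex_iii_use_case=True,   # employment (Annex III, point 4)
    interacts_with_people=False,
)
print(record.triage())  # RiskCategory.HIGH_RISK
```

A spreadsheet or asset register would serve equally well at this stage; the point is to capture each system’s value chain role, definitional status and candidate risk category before the legal and ethical use-case evaluation.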

AI Quality Management System (QMS)

The new British and international AI Management System Standard, BS ISO/IEC 42001:2023, sets out the controls required to ensure that AI risk is evaluated and that AI use is reviewed, managed and monitored within an overarching Responsible AI governance framework, set in AI policy with board accountability. The British Standards Institution launched the standard in 2024; the launch webinar and a range of useful information resources can be found here.

Preparations towards the standard can be undertaken now, starting with an evaluation of all AI systems planned or in use within the organisation against the legal definition of AI set out in the Act. Anekanta® specialise in the responsible use of high-risk AI, from biometrics through to Gen AI. We have developed a range of risk frameworks, designed to discover the AI systems in use or in development, which underpin our range of services.
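
As an illustration of that first screening step, the sketch below encodes the main elements of the Article 3(1) definition as a simple checklist. The element wording is paraphrased and the all-elements-required logic is our own simplification (adaptiveness “may” be present under the definition, so it is deliberately left out of the required list); it is not a substitute for legal analysis.

```python
# Illustrative sketch only: paraphrased elements of the Article 3(1) definition.
REQUIRED_ELEMENTS = [
    "machine-based system",
    "operates with some level of autonomy",
    "infers from inputs how to generate outputs",
    "outputs (predictions, content, recommendations or decisions) can influence environments",
]


def screens_in(answers: dict[str, bool]) -> bool:
    """True only if every required definitional element is answered 'yes'."""
    return all(answers.get(element, False) for element in REQUIRED_ELEMENTS)


# Example: a static, rule-free lookup table would likely screen out
answers = {element: True for element in REQUIRED_ELEMENTS}
answers["infers from inputs how to generate outputs"] = False
print(screens_in(answers))  # False – likely outside the Act's definition
```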

A new QMS specifically designed to meet the requirements of the EU AI Act is aligned with much of ISO/IEC 42001 and will be released in due course. It will become a harmonised standard which contributes towards a presumption of conformity, provided that other relevant requirements are also met, depending on the category of the AI system and the conformity assessment route defined in Article 43 of the EU AI Act. Please get in touch if you are preparing for high-risk conformity assessments.
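
For orientation only, the sketch below reduces the Article 43 routing to its two broad outcomes for Annex III systems. It omits the Annex I product-legislation route, the common specifications alternative and other conditions in the Act, so treat it as a signpost rather than a decision tool.

```python
# Highly simplified, illustrative sketch of Article 43 routing for Annex III systems.
def conformity_route(annex_iii_point: int, standards_fully_applied: bool) -> str:
    if annex_iii_point == 1:  # biometrics (Annex III, point 1)
        if standards_fully_applied:
            # The provider may choose either procedure
            return "internal control (Annex VI) or notified body (Annex VII)"
        return "notified body (Annex VII)"
    # Other Annex III points: internal control
    return "internal control (Annex VI)"


print(conformity_route(annex_iii_point=4, standards_fully_applied=True))
# internal control (Annex VI)
```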

Specialised Industry and Sector Skills Required

To identify the AI systems in use effectively, specialised AI system skills and knowledge are required, together with an understanding of the commercial embodiment of AI in industrial systems, sectors and complex value chains. This is due to the historical lack of disclosure of AI system features in technical documentation. The Act requires disclosure to meet conformity requirements, which will make procurement easier and reduce the risk of downstream cost and impact.

We can help

To find out how Anekanta® can help your organisation prepare for the Act’s requirements, explore our EU AI Act Compliance Services or get in touch and we will be happy to discuss your needs.

Anekanta® specialise in AI Strategy, Risk, Literacy and Governance. Since 2019 our globally recognised AI risk and governance frameworks have been helping our clients choose and use AI responsibly to enhance their organisations’ success.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
