EU AI Act: Cleared to move to the next stage. Your action list to get started.

Organisations are urged to review their development road maps and current use cases to determine whether their AI systems fall into the prohibited or high-risk categories.

Who should take action to classify their AI systems?

If you are a: “‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge;”*

or a: “‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;”*

and your AI system's decisions are used in the EU, then this applies to you.

*Source: EU AI Act February 2024

When do you need to take action?

If you are a provider or deployer of a prohibited system, you will have to comply within 6 months and 20 days of the Act being published in the Official Journal: the Act enters into force on the 20th day after publication, and the prohibitions apply 6 months after that. This could be as soon as November 2024.
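As a minimal sketch of that timeline arithmetic (the publication date below is hypothetical; the actual date was not known at the time of writing):

```python
from datetime import date

from dateutil.relativedelta import relativedelta

# Hypothetical Official Journal publication date -- substitute the
# actual date once the Act is published.
publication = date(2024, 5, 1)

# The Act enters into force on the 20th day following publication...
entry_into_force = publication + relativedelta(days=20)

# ...and the prohibitions apply 6 months after entry into force.
prohibitions_apply = entry_into_force + relativedelta(months=6)

print(f"Entry into force:   {entry_into_force}")    # 2024-05-21
print(f"Prohibitions apply: {prohibitions_apply}")  # 2024-11-21
```

A May publication, for example, would put the prohibition deadline in late November 2024, which is why we say compliance could be required that soon.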

Providers and deployers of high-risk AI systems should move forward as soon as possible to implement Responsible AI practice across their development, integration, procurement and operational teams, their third-party providers, and their governing body.

What do you need to do now?

The design, development, integration and testing cycle can be lengthy for new technology. Progressing from an idea on the bench through commercialisation and into a global channel can take 2–3 years, depending on the product or service and on whether hardware development is involved. The cycle is arguably faster when the AI system is a system of joined systems notionally operating through a cloud-based provider.

Additionally, organisations may already be using AI systems without oversight, because those systems have evolved commercially without any regulatory guardrails. There has been no consistency in the language and terminology used to describe functionality, and some vendors do not communicate their AI techniques at all, a commercial decision that keeps the focus on other aspects of system functionality.

Due to this long review cycle, we recommend that you do not delay. Furthermore, larger organisations have long and complex decision-making cycles involving many stakeholders, some of whom may need time to get up to speed.

We have set out some of the actions (non-exhaustive) which will set you on the right path:

✔ Determine the risk category of your AI system (see the sketch after this list)
✔ Set a governing body level policy
✔ Develop a road map to compliance
✔ Design in trustworthy principles
✔ Implement the ISO/IEC 42001 AI Management System
✔ Meet all legal/conformity requirements
✔ Create compliant technical documentation
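
As a purely illustrative sketch of the first action, the Python below maps a handful of example use cases onto the Act's risk tiers. The use cases, the mapping and the triage function are simplified assumptions for illustration only, not a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk"

# Assumed, simplified mapping of example use cases to EU AI Act risk
# tiers -- illustrative only; a real classification needs legal review.
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "cv screening for recruitment": RiskTier.HIGH,
    "credit scoring of natural persons": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier | None:
    """Return the assumed tier, or None to flag the case for expert review."""
    return USE_CASE_TIERS.get(use_case.strip().lower())

for case in ("CV screening for recruitment", "spam filtering", "novel biometric analytics"):
    tier = triage(case)
    print(f"{case}: {tier.value if tier else 'unclassified -- escalate for review'}")
```

The point of even a toy triage like this is the default: anything unrecognised is escalated for review rather than silently treated as minimal risk.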

We provide the full suite of services, underpinned by our risk intelligence systems built on the OECD principles, the EU AI Act and all related regulations, to lead you through this process from inception to implementation.

Contact us for further information.


We would love to hear from you:
Call us on +44 (0)20 3923 0230
Email: ask@anekanta.co.uk
Fill out our contact form


