Unpacking the EU AI Act Negotiations

The EU AI Act took another step forward

Spain has taken up the baton through its Presidency of the EU Council to see the EU AI Act through its negotiation phase.

The EU AI Act is relevant to all sectors and domains in which we operate, from security to smart cities and buildings, transportation, aviation, retail and more.

If AI-generated outputs are used within the EU, the system is subject to the Act, whether or not the software itself resides in the EU.

If you have just started tracking global AI regulatory developments and you are concerned about the impact of the EU AI Act on your organisation and its activities, the latest texts are:

The foundational wording agreed in December 2022, known as the ‘compromise text’ (200+ pages), plus over 700 amendments agreed in June 2023 (300+ pages).

The key areas in focus during the current phase are:

1. the definition of AI
2. high-risk AI classifications
3. high-risk AI use cases
4. fundamental rights impact assessments

Having unpacked these developments, Anekanta® may help your organisation in a number of ways:

1. the definition of AI

Anekanta® have created a framework which determines whether software is AI or not. Without this understanding, the board may not be equipped to assess risk.

2. high-risk AI classifications

Anekanta® have classified all relevant AI systems within our sectors. Without this you may not know whether AI is in use in your business, or whether it is likely to be subject to the EU AI Act. Uncertainty increases the risk of misclassification, which under the Act attracts penalties if not remedied in a timely manner.

3. high-risk AI use cases

Anekanta® research, analyse and determine high-risk use cases and recommend the mitigations required. The Act sets out not only the types of AI systems which present higher risks, but also provides examples of specific use cases which are considered to present the greatest risks to the health, safety and fundamental rights of people.

4. fundamental rights impact assessments

Anekanta® have developed an automated high-level fundamental rights impact assessment, already in use by our global client base: a ‘super-charged’ data protection impact assessment, which is a requirement for all high-risk implementations.

Anekanta®AI and Anekanta®Consulting are established in the field of high-risk AI research, impact and risk evaluation. We focus on legislative frameworks, standards and fundamental rights.

To find out how we may assist your organisation, increase opportunities and reduce risk, please get in touch. We work confidentially under NDA.


We would love to hear from you:
Call us on +44 020 3923 0230
Email: ask@anekanta.co.uk
Fill out our contact form



Explore our news and insights

  • Background to Anekanta®’s work on the ethical use of facial recognition technology
    Anekanta®AI have developed a number of independent, vendor agnostic ethical AI governance frameworks expertly designed to assess the safety of high-risk AI development and deployment and its impact on stakeholders. Here is the background to our work in this area.
  • Anekanta® recognised for ethical and responsible AI development and implementation
    We are proud to announce the release of a ground-breaking new British Standard, “Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice (BS 9347:2024)”. The Anekanta® Privacy Impact Risk Assessment System™ de-risks FRT use cases anywhere in the world and recommends the new standard for UK mitigation purposes.
  • Examining the ‘cost-benefit’ of adopting Responsible AI within your business
    By adopting Responsible AI practice, businesses can unlock the transformative power of AI technology while minimising risks and maintaining stakeholder trust. Anekanta® recommends that organisations start their AI policy and AI Management System implementation (BS ISO/IEC 42001) alongside legal (EU AI Act) and regulatory considerations, depending on the regions where the business operates or uses AI decisions.
  • What does the new legal definition of AI systems mean in practice for commercial use cases?
    Organisations may not be certain whether their systems are AI due to the range of contrasting views from vendors and buyers. It is important that they recognise and categorise these systems as soon as possible. Developers, providers and user/deployers of AI systems which meet the EU AI Act definition criteria, and whose decisions are utilised within the EU, have to comply with the law.
  • Anekanta® Responsible AI frameworks featured by the OECD.AI Policy Observatory
    Anekanta® Responsible AI frameworks have been featured in the UK Government Responsible Technology Adoption Unit (formerly CDEI) Portfolio of AI Assurance Techniques, and now we are announcing that we are published in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Find out more.