AI may manifest in various forms – how do you identify it?


This article was first published on July 17th 2023 and updated on February 29th 2024

The definition of an AI system is now codified into law

Any organisation that develops, provides or uses/deploys software meeting the AI definition criteria, where the system's decisions are used within the EU, will be subject to the EU AI Act1.

How the definition evolved to where it is today – OECD.AI update

In November 2023, the OECD.AI updated the definition of AI adopted by participating nations to include an element of adaptiveness, following the release of generative AI and large language models:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

On 26th January 2024, the EU AI Act was updated with the new OECD definition

II. MAIN ELEMENTS OF THE COMPROMISE

2 Definition of an AI system
The definition of an AI system in Article 3(1) has been modified to align it more closely with the work of international organisations working on artificial intelligence, notably the OECD. Moreover, the corresponding Recital 6 details further the key characteristics of the definition and clarifies that the definition is not intended to cover simpler traditional software systems or programming approaches, which are based on the rules defined solely by natural persons to automatically execute operations. Additionally, the Commission has been tasked to develop guidelines on the application of the definition of an AI system.

Source: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf

Article 3 (1) ‘AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

Source: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
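As a thought experiment, the elements of the Article 3(1) definition can be expressed as a simple screening check. The sketch below is illustrative only and not a legal test; the attribute names and the decision to treat autonomy and adaptiveness as non-gating are our own paraphrase of the definition's wording.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """One candidate system, described against the Article 3(1) elements."""
    machine_based: bool           # runs as software/hardware, not a human process
    infers_outputs: bool          # derives outputs from inputs, not fixed rules alone
    has_objectives: bool          # operates for explicit or implicit objectives
    influences_environment: bool  # outputs can influence physical or virtual environments

def meets_ai_definition(p: SystemProfile) -> bool:
    # Autonomy and adaptiveness "vary" between AI systems under the
    # definition, so this sketch does not treat them as gating criteria.
    return all([p.machine_based, p.infers_outputs,
                p.has_objectives, p.influences_environment])

# A rules-only payroll calculator does not infer, so it falls outside
# the definition; a CV-screening model that infers recommendations falls inside.
calculator = SystemProfile(True, False, True, True)
screener = SystemProfile(True, True, True, True)
```

This mirrors Recital 6's clarification that traditional software executing rules defined solely by natural persons is not intended to be covered: such systems fail the "infers outputs" element.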

What does this definition actually mean in the context of industrial and organisational use cases?

When the EU AI Act comes into force (c. April 2024), it will automatically become law in all 27 member states. It is a law that requires Responsible AI principles and requirements to be implemented throughout the value chain.

All providers and deployers of AI systems within the EU will have to comply with the law within 6, 12, 24 or up to 36 months, depending on the risk category and type, as defined in Article 85.

Is your organisation aware of its AI system use cases?

Many organisations may not be certain whether their systems are AI, given the range of contrasting views among vendors and buyers as to what constitutes AI. It is important that they start categorizing these systems as soon as possible. Although enforcement may seem a long way off (two years for high-risk systems), roadmap plans may require review and modification, AI system procurement plans may need to be altered, value chain risk assessed, AI literacy and education programmes created and embedded, best practice guides implemented, employee consultation processes undertaken, reporting and escalation procedures developed, and so on.

The AI system value chain can be complex, and it is set to grow more complex given the ease with which AI functionality can be added to existing systems through readily available integration APIs. Software may utilise widely available algorithms which have been tested for performance in various scenarios, both live and retrospectively, and achieved a rating from NIST. But how would you know this if the vendor is not transparent, whether for reasons of commercial confidentiality or, more often, because until now there has been no requirement to disclose such information?

Many AI systems are already embedded in company operational processes. However, procurement teams may not know what questions to ask of vendors. Data teams may not know to inform leaders that their latest API integration is an LLM, and would the Board understand the language? Is the Board AI literate? Has it set an AI policy? Is AI embedded in HR and recruitment software? In manufacturing processes? Facilities management? Behavioural analysis? Security? AI systems are highly likely to be everywhere, from SMEs up to large corporates, and there has been no reason until now to keep track of their use.

Commercial organisations are facing the task of reviewing all software in place in their businesses, identifying AI systems and categorizing the software according to the risk levels set out in the EU AI Act.

Categorizing the AI systems is just the starting point. Having done so, if an AI system falls into the prohibited category, the organization must cease its use within 6 months of the Act being applied in May 2024.
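The review-and-categorize exercise described above can be pictured as a simple inventory pass over an organisation's software estate. The sketch below is purely illustrative: the risk tiers follow the Act's categories as discussed in this article, but the exact mapping of each tier to a deadline is an assumption for the example, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative deadlines in months after the Act applies, echoing the
# tiered 6/12/24/36-month timeline mentioned above. The tier-to-deadline
# mapping here is an assumption for the sketch only.
DEADLINE_MONTHS = {
    RiskTier.PROHIBITED: 6,
    RiskTier.HIGH: 24,
    RiskTier.LIMITED: 24,
    RiskTier.MINIMAL: 36,
}

def compliance_action(system_name: str, tier: RiskTier) -> str:
    """Summarise the required action for one inventoried system."""
    months = DEADLINE_MONTHS[tier]
    if tier is RiskTier.PROHIBITED:
        return f"{system_name}: cease use within {months} months"
    return f"{system_name}: comply with {tier.value} obligations within {months} months"
```

Running each inventoried system through a pass like this turns a vague "are we exposed?" question into a concrete, dated action list for the Board.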

AI systems categorized as high-risk require a number of additional safeguards to be in place: an AI management system, fundamental rights impact assessments, risk assessments and ongoing risk management, specific documentation, transparency and explainability requirements, registration of the AI system in an EU database, conformity assessments and certification, post-market monitoring, logging, and so on. We have not listed them all due to the extent and complexity of compliance. Use cases vary between organizations, and this variation may alter the categorization of their AI systems. Furthermore, if AI systems are combined, they may acquire high-risk characteristics which would not exist if the systems were operated within business functional silos.
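The high-risk safeguards listed above lend themselves to tracking as a per-system checklist. In the sketch below, the safeguard labels paraphrase the items named in this article; they are illustrative shorthand, not the Act's legal terminology, and the list is deliberately incomplete, as is the article's.

```python
from dataclasses import dataclass, field

# Safeguards named above for high-risk AI systems (illustrative labels).
HIGH_RISK_SAFEGUARDS = [
    "AI management system in place",
    "Fundamental rights impact assessment completed",
    "Risk assessment with ongoing management",
    "Documentation, transparency and explainability requirements met",
    "System registered in the EU database",
    "Conformity assessment and certification obtained",
    "Post-market monitoring and logging operational",
]

@dataclass
class HighRiskChecklist:
    """Tracks safeguard completion for one high-risk AI system."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, safeguard: str) -> None:
        if safeguard not in HIGH_RISK_SAFEGUARDS:
            raise ValueError(f"Unknown safeguard: {safeguard}")
        self.completed.add(safeguard)

    def outstanding(self) -> list:
        # Preserve the canonical ordering when reporting gaps.
        return [s for s in HIGH_RISK_SAFEGUARDS if s not in self.completed]
```

A structure like this also makes the siloes-versus-combined point concrete: when two systems are combined, the merged use case may need its own checklist even if each system already has one.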

How can Anekanta® help?

Anekanta® was founded by an experienced industrialist with real-world experience of running SMEs and large organisations at Board level, applying complex AI systems to high-risk problems and managing risk and expectation throughout complex value chains where a single technology component can affect every system around it if incorrectly procured, integrated and managed.

Anekanta® is well placed to work with your organisation thanks to our team's decades of combined experience in AI systems applications and our expertise in identifying the principal features and behaviours of software with embedded AI techniques that are often invisible to the buyer and user. We know how to independently address the needs of commercial organisations because of our deep understanding of how commercial organisations actually work.

We know about business, systems and processes, financial management systems, QA management systems, Board governance, risk management, people management, complex supply chain relationships, impact, reputational risk and management, ethical vs commercial decisions, investor and shareholder management and so on. Our knowledge is gained through evaluating and developing businesses and their systems and processes required to enable them to operate effectively. Our team’s experience has enabled us to create unique systems designed to evaluate the AI systems in use and automatically classify them to the definition and requirements in the EU AI Act.

Through a professional independent review, Anekanta® will assess your specific use cases in the context of your organisation and its operating environment, and provide a range of outputs, from gap analysis through to checklists of the actions you need to undertake to manage risk and move towards EU AI Act compliance.

Who should take action?

If you are an executive leader or policy maker who would like to take advantage of AI for the benefit of your organisation, you will need to understand how to identify it. Having identified it, you need to know how to govern it and become compliant with the new EU regulations.

If you would like to find out more about how we can help your organisation prepare for the EU AI Act, please get in touch via our contact form or email ask@anekanta.co.uk


Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.

Visit our full range of services here.

Anekanta® was formed in 2016 and took up activities in late 2019. We have extensive experience in the field of AI research and risk, and a proven track record with global clients.

  1. https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence?sid=7701 ↩︎
