New EU AI Regulation – Artificial Intelligence Act

Anekanta Consulting welcomes the EU proposals released today

We believe that the new regulation will provide clarity to developers and users of AI-based systems and will hold all stakeholders to account for the impact of AI when it is used in high-risk scenarios.

The EU is laying down harmonised rules on artificial intelligence, which will also amend certain Union legislative acts.

Against this political context, the Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

• ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

• ensure legal certainty to facilitate investment and innovation in AI;

• enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

• facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Source: Extract from the new proposal

What does this mean for AI providers?

The new proposal, which still has to go through the EU legislative process, sets out a number of new requirements which must be met before AI-based products can be placed on the market in the EU.

We have provided a brief insight into a number of the requirements:

A new CE marking for AI, based on meeting a number of criteria including detailed technical documentation, transparency around training data sets, detailed descriptions of how the AI makes decisions, and logging of the AI's decisions.
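As an illustration of what the decision-logging requirement might look like in practice, here is a minimal sketch of our own. The record fields, model identifiers and the log_decision helper are assumptions for illustration; the proposal does not prescribe a schema.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of decision logging for auditability.
# The record structure below is our assumption, not taken from the proposal.
audit_logger = logging.getLogger("ai_decision_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(model_id: str, model_version: str, inputs: dict, output, confidence: float) -> None:
    """Append a structured, timestamped record of a single AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,        # or a reference to where the inputs are stored
        "output": output,
        "confidence": confidence,
    }
    audit_logger.info(json.dumps(record))

# Example: one decision from a hypothetical credit-scoring model.
log_decision("credit-scoring", "1.4.2", {"applicant_id": "A-1001"}, "approve", 0.87)
```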

Ongoing market monitoring, which includes continuing assessments and close scrutiny of AI systems that learn, to ensure that the learning does not introduce bias.
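One simplified way such monitoring could be approached is sketched below: comparing the rate of positive decisions per group against a baseline after each retraining. The selection-rate metric, the group labels and the 10% tolerance are our own assumptions for illustration, not requirements stated in the proposal.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive decisions per group.
    decisions: iterable of (group, outcome) pairs, outcome 1 = positive decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def flag_drift(baseline, current, tolerance=0.10):
    """Flag groups whose selection rate has moved more than `tolerance` since baseline."""
    return {g: (baseline[g], current[g])
            for g in baseline
            if g in current and abs(current[g] - baseline[g]) > tolerance}

# Toy data: after retraining, the model's behaviour has shifted for group_b.
baseline = selection_rates([("group_a", 1), ("group_a", 1), ("group_b", 1), ("group_b", 0)])
current = selection_rates([("group_a", 1), ("group_a", 1), ("group_b", 0), ("group_b", 0)])
print(flag_drift(baseline, current))  # {'group_b': (0.5, 0.0)}
```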

An outright ban on AI which subliminally manipulates consumer choices or opinions, AI which exploits vulnerabilities, and AI which evaluates trustworthiness through the analysis of social behaviour or personality characteristics observed over time.

Highly restricted use of biometric identification systems, such as automated facial recognition used by law enforcement in publicly accessible places, unless the use is explicitly authorised by the governing authority and is for a very specific purpose.

The new regulation will take priority over certain clauses in EU Directive 2016/680 (the Law Enforcement Directive).

The proposed new regulation is risk-based rather than prescriptive about specific AI technologies.

High-risk AI categories include:

1. Biometric identification and categorisation of natural persons

2. Management and operation of critical infrastructure

3. Education and vocational training

4. Employment, workers management and access to self-employment

5. Access to and enjoyment of essential private services and public services and benefits

6. Law enforcement

7. Migration, asylum and border control management

8. Administration of justice and democratic processes

If AI is trained using biased historical data, it is likely to produce biased recommendations. We see the proposed new regulation as a starting point towards preventing the amplification of societal and institutional bias.
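As a toy illustration of that point (the numbers are synthetic and our own, not drawn from the proposal), a model that simply learns historical approval rates will reproduce whatever disparity the history contains:

```python
# Synthetic history: group_a was historically approved far more often than group_b.
historical_decisions = (
    [("group_a", 1)] * 80 + [("group_a", 0)] * 20 +
    [("group_b", 1)] * 30 + [("group_b", 0)] * 70
)

def learned_approval_rate(history, group):
    """A naive 'model' that simply mirrors the historical approval rate for a group."""
    outcomes = [outcome for g, outcome in history if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(group, learned_approval_rate(historical_decisions, group))
# group_a 0.8  -> the learned recommendations reproduce the historical bias
# group_b 0.3
```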

We will be providing advice and guidance to our AI developer and user clients over the coming weeks.

If you have any questions, please contact us at ask@anekanta.co.uk or fill in our contact form.
