Planning your High Risk AI Strategy?

AI innovators need to act now and make design decisions which anticipate the proposed EU AI regulation

Following the release of the proposed EU Artificial Intelligence Act (EU AI Act) in April 2021, we published our initial position welcoming the move by the EU towards much-needed clearer guidance for the developers, providers and users of high risk AI.

Here we examine the implications and outline some of the steps innovators should consider now, in anticipation of the proposed AI regulation being enshrined in EU legislation within the next 18-24 months (2022/23).

Why is this important?

The proposed regulation for artificial intelligence reaches far beyond the boundaries of GDPR. Developers are already compelled to produce AI technology with privacy and security designed in, and the market demands low risk technology decisions. Users have a number of obligations, including ensuring that data subjects have given express consent, and respecting the impact the use of the technology has on the privacy of the individual.

Punitive fines have been imposed by the data protection authorities across the EU Member States and in the UK to ensure that companies suffer the consequences of breaching trust. Regular cyber health checks and Data Protection Impact Assessments can help companies to avoid the wrath of the data protection authorities.

However, such fines must serve as a wake-up call to business leaders operating in or with the EU, compelling them to correct the lack of awareness and take action at board governance level to protect the privacy and security of personal data.

The largest fine imposed by the UK authorities was on British Airways, which failed to secure its customers’ data, resulting in a breach which cost the company over £20m. A lack of leadership direction and basic cyber hygiene, together with operational shortcomings, was blamed for the systemic failures.

GDPR also defines strict use case scenarios for otherwise prohibited processing of biometric data, which is classified under Article 9 of GDPR as special category data. It also guides the use of automated decision making and privacy and security by design. But GDPR does not go far enough in regulating AI-based technologies, leaving some ambiguities which are difficult to navigate, such as the use of automated facial recognition systems (biometrics) for law enforcement purposes.

Prohibited AIs

The proposed EU regulation defines a number of levels of risk associated with the use of certain AIs, and imposes an outright ban on systems which subliminally manipulate individuals in ways that result in physical or psychological harm; systems which exploit children or mentally disabled persons; general purpose social scoring; and remote biometric identification for law enforcement purposes (except in certain use cases).

Although included in the prohibited list, the use of remote biometric identification software for law enforcement purposes in publicly accessible spaces is actually allowed where special permission has been granted by the regulatory authority for each use case. In these circumstances remote biometric identification software sits in the high risk category, with additional third-party assessment and reporting requirements.

Summary of prohibited AIs

  • Subliminal manipulation resulting in physical/psychological harm. Example: an inaudible sound is played in truck drivers’ cabins to push them to drive longer than is healthy and safe; AI is used to find the frequency maximising this effect on drivers.

  • Exploitation of children or mentally disabled persons resulting in physical/psychological harm. Example: a doll with an integrated voice assistant encourages a minor to engage in progressively dangerous behaviour or challenges in the guise of a fun or cool game.

  • General purpose social scoring. Example: an AI system identifies at-risk children in need of social care based on insignificant or irrelevant social misbehaviour of parents, e.g. missing a doctor’s appointment or divorce.

  • Remote biometric identification for law enforcement purposes in publicly accessible spaces (with exceptions). Example: all faces captured live by video cameras are checked, in real time, against a database to identify a terrorist.
Source: European Commission

What is high risk AI?

High risk AI is defined through a number of specific examples which apply across horizontal markets. Sectoral uses are also defined, resulting in a matrix of AI types and possible uses.

In addition to remote biometric identification systems, AIs which form part of a safety system or work as standalone products for safety purposes are generally classified as high risk, and are therefore subject to strict pre-market and post-market requirements and monitoring. Such AIs also remain subject to sector-specific regulation, for example safety systems in vehicles.

The challenge for AI innovators and developers is in determining whether their AI could be considered to be high risk or may be capable of being used in one of the high risk scenarios.

A certain amount of anticipatory thinking is required in order to overcome the risks rooted in the uncertainty around the shape and form of the final legislation.

Summary of high risk AIs

SAFETY COMPONENTS OF REGULATED PRODUCTS
e.g. medical devices and machinery, which are subject to third-party assessment under relevant sectoral legislation
CERTAIN (STAND-ALONE) AI SYSTEMS IN THE FOLLOWING FIELDS

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment and workers management, access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes
Source: European Commission

Implications for high risk AI developers

The strategy and development path for AI can be long and complex. Decisions need to be made early in the design process to ensure that the resulting technology is ready to pass the strict ex-ante conformity assessment, CE marking and post-market monitoring required by the EU regulation. We expect the database of high risk AIs to be up and running as early as 2022, at which time AI providers will be required to register their technology on a publicly accessible database.

The challenge now for AI developers lies in navigating where their AI development sits within what has become a Venn diagram of use cases and overlapping legislation. A gravitation towards the strictest standards is likely, in order to avoid developing multiple versions for different regions; this is how GDPR has influenced policy making across the western world. Regions such as the USA, which previously lacked federally imposed privacy rules, are now creating state-level legislation, such as California’s CCPA, which at least mirrors GDPR requirements, and federal legislation is on the horizon.

Establish and implement risk management processes and, in light of the intended purpose of the AI system (see the sketch after this list):

  • Use high-quality training, validation and testing data (relevant, representative, etc.)

  • Establish documentation and design logging features (traceability & auditability)

  • Ensure an appropriate degree of transparency and provide users with information (on how to use the system)

  • Ensure human oversight (measures built into the system and/or to be implemented by users)

  • Ensure robustness, accuracy and cybersecurity
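
To make these requirements more concrete, below is a minimal, hypothetical Python sketch of the kind of logging feature that could support the traceability and auditability point above. The class, field names and log format are our own illustrative assumptions, not anything prescribed by the proposed regulation.

    import hashlib
    import json
    import logging
    from datetime import datetime, timezone

    # Append-only audit log: one JSON record per prediction event.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                        format="%(message)s")

    class AuditedModel:
        """Wraps a prediction function and logs every call for traceability.

        Illustrative only: the record structure is an assumption, not a
        requirement taken from the proposed EU AI Act.
        """

        def __init__(self, predict_fn, model_id, model_version):
            self.predict_fn = predict_fn
            self.model_id = model_id
            self.model_version = model_version

        def predict(self, features, operator_id):
            output = self.predict_fn(features)
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model_id": self.model_id,
                "model_version": self.model_version,
                # Hash rather than store raw inputs, so the audit trail
                # itself does not accumulate personal data.
                "input_hash": hashlib.sha256(
                    json.dumps(features, sort_keys=True).encode()
                ).hexdigest(),
                "output": output,
                # Recording who ran the system supports human oversight.
                "operator_id": operator_id,
            }
            logging.info(json.dumps(record))
            return output

    # Example: wrap a trivial scoring function.
    model = AuditedModel(lambda f: {"score": 0.42}, "risk-scorer", "1.3.0")
    model.predict({"age": 41, "history": "none"}, operator_id="analyst-7")

Records of this kind could then feed the technical documentation and post-market monitoring evidence which providers are expected to maintain.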

Designing in ethics and best practice

GDPR already requires developers to design privacy and security into their solutions. The EU regulation goes further, requiring the validation of training data and auditable data governance measures. Developers of automated facial recognition software have already anticipated potential challenges to the bias of their training data, and have taken steps to generate data synthetically in order to create a wider and more representative demographic. Accuracy rates of high quality algorithms have increased to up to 99%, according to NIST.
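
As an illustration of the kind of data governance check a developer might automate before training, the following hypothetical Python sketch flags demographic groups which are under-represented in a training set. The attribute names and the tolerance threshold are illustrative assumptions, not values drawn from the proposed regulation or from NIST.

    from collections import Counter

    def representation_report(records, attribute, tolerance=0.5):
        """Flag groups that fall well below an even share of the data.

        The 'tolerance' threshold is an arbitrary illustration, not a
        figure from any standard or regulation.
        """
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        expected_share = 1 / len(counts)
        report = {}
        for group, n in counts.items():
            share = n / total
            report[group] = {
                "share": round(share, 3),
                "under_represented": share < tolerance * expected_share,
            }
        return report

    # Hypothetical training records with a self-reported demographic label.
    training_data = [
        {"image_id": 1, "demographic": "group_a"},
        {"image_id": 2, "demographic": "group_a"},
        {"image_id": 3, "demographic": "group_a"},
        {"image_id": 4, "demographic": "group_b"},
    ]
    print(representation_report(training_data, "demographic"))

A check of this kind does not prove the absence of bias, but it makes the composition of the training data visible and auditable, which is the direction the proposed regulation points in.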

In the intervening period before the EU regulation is ratified, developers can continue innovating within the ethical frameworks which already exist, for example the high-level principles created by the OECD and the EU ethics guidelines for trustworthy AI, which are becoming woven into sector-specific documents.

A specific example of an industry which anticipated the need for guidance and standardisation around the use of high risk AI is the security industry, a substantial generator of edge sensor data. This data becomes more contextually valuable once AI analysis is applied: millions of video surveillance images are subject to object detection and photogrammetric analysis, in addition to automated facial recognition for both verification and identification purposes.

As the proposed EU regulation specifically cites automated facial recognition software (remote biometric identification), with certain conditions, as high risk AI, the security industry has pre-empted such a move by creating specific guidance around its ethical and legal use.

The BSIA created a first-of-its-kind guide to the ethical and legal use of automated facial recognition software, published in early 2021. This landmark guide offers practical advice to users of the technology on how to put it to best use while respecting the privacy and human rights of its subjects. It is an example of vision and proactivity in the sector, ensuring that beneficial technology can be used safely, bearing in mind that the criminal does not wait.

We expect to see further sector specific guides emerging over time as developers seek clarity around the classification and potential use cases for their technology.

How can we help?

Anekanta Consulting specialises in advising stakeholders and innovators providing technology across horizontal and vertical markets and domains, from defence, security and video surveillance to retail and transportation. These are industries whose use of high risk AI will potentially be affected by the new EU regulations.

We are here to help you to assess your AI risks at an early stage and guide you along the journey.

If you would like to learn more about Anekanta Consulting’s expertise in this area please contact us on +44 020 3923 0230 or ask@anekanta.co.uk
