UK AI Regulation – Which Direction?

Anekanta® response to the UK Government’s AI Regulation Policy Paper

In July 2022, the UK Government published its policy paper for the regulation of AI [1]. We have laid out the key points expressed in the Executive Summary and the questions asked in the public consultation. Anekanta®'s response, titled "UK AI Regulation – Which Direction?", is laid out below.

Executive Summary

In the National AI Strategy, the government set out an ambitious ten-year plan for the UK to remain a global AI superpower. The UK is already a leader in many aspects of AI, with a thriving ecosystem and a strong track record of innovation. But there is more to do to harness the enormous economic and societal benefits of AI while also addressing the complex challenges it presents.

Establishing clear, innovation-friendly and flexible approaches to regulating AI will be core to achieving our ambition to unleash growth and innovation while safeguarding our fundamental values and keeping people safe and secure. Our approach will drive business confidence, promote investment, boost public trust and ultimately drive productivity across the economy.

The UK has a world leading regulatory regime – known for its effective rule of law and support for innovation. We need to make sure that our regulatory regime is able to keep pace with and respond to the new and distinct challenges and opportunities posed by AI. This is key to remaining internationally competitive.

We are therefore proposing to establish a pro-innovation framework for regulating AI which is underpinned by a set of cross-sectoral principles tailored to the specific characteristics of AI, and is:

Context-specific. We propose to regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and to delegate responsibility for designing and implementing proportionate regulatory responses to regulators. This will ensure that our approach is targeted and supports innovation.

Pro-innovation and risk-based. We propose to focus on addressing issues where there is clear evidence of real risk or missed opportunities. We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI. We want to encourage innovation and avoid placing unnecessary barriers in its way.

Coherent. We propose to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. In order to achieve coherence and support innovation by making the framework as easy as possible to navigate, we will look for ways to support and encourage regulatory coordination – for example, by working closely with the Digital Regulation Cooperation Forum (DRCF) and other regulators and stakeholders.

Proportionate and adaptable. We propose to set out the cross-sectoral principles on a non-statutory basis in the first instance so our approach remains adaptable – although we will keep this under review. We will ask that regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, we will also seek to work with existing processes rather than create new ones.

Extract from the Executive Summary. Command Paper: CP 728 © Crown copyright 2022 ISBN: 978-1-5286-3639-1

A public consultation, which closed on 26th September 2022, sought answers to a number of key questions:

  1. What are the most important challenges with our existing approach to regulating AI?
    Do you have views on the most important gaps, overlaps or contradictions?
  2. Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper?
    What do you see as the benefits of this approach? What are the disadvantages?
  3. Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach?
    Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?
  4. Do you have any early views on how we best implement our approach?
    In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?
  5. Do you anticipate any challenges for businesses operating across multiple jurisdictions?
    Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?
  6. Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

UK AI Regulation – Which Direction?

Anekanta® is a British company with a UK and international client base. We work with global organisations which develop and apply AI technologies in security and critical infrastructure, and which are subject to different rules, regulations and guidance across geographical regions and jurisdictions.

Our customers are seeking clarity, legal certainty and operating guardrails to enable them to make decisions about their use of AI technology. Our response to the UK Government consultation is therefore informed by this real-world experience.

We have published an extract from our responses; for a full copy of the document, please fill in the contact form or send a request to ask@anekanta.co.uk.
The UK Government's questions appear as numbered headings; the Anekanta® response, issued 26th September 2022, is shown below each.

1.   What are the most important challenges with our existing approach to regulating AI?

The regulation of AI technologies is in its infancy. Although the OECD and the EU High-Level Expert Group have created sets of principles which should underpin the trustworthiness of all AI, the operationalisation of these principles appears to be driven by the use of AI technologies in a range of sectors. There is no overarching authority through which sectors can validate their ideas about how AI should be governed. As a result, we believe there is duplication of effort and inconsistency in the approach to assessing risk.

We consider that the existing approach may make it difficult for organisations to assess risk, which in turn may cause inertia and slow the uptake of AI innovation by buyers.

While waiting for regulation in certain areas, the regulatory void has almost certainly already stifled innovation because of the confusion around the ethical and legal implications arising from the various use cases for AI, for example facial recognition. The ethical and legal considerations are domain- and context-driven, and although certain aspects are partially covered in existing legislation, the actual use cases are not. When both low and high risks coexist, the path through legislation, which should assist in differentiating between the two levels of risk, is not clear.

Furthermore, a range of technologies, when implemented in various forms, have already been proven to be highly harmful to individuals. The evidence to support this point does not necessarily sit within the UK. The existing approach appears to ignore learnings from countries outside the UK and instead implies that the UK will wait for harms to be experienced by British citizens before action is taken to regulate the use of the technologies.

We refer to the Dutch Government's use of a predictive algorithm which caused thousands of people to be racially profiled and automatically, incorrectly fined for child welfare fraud they did not commit. Amnesty International published a deeply disturbing report on the issue [2].

In the case of the Dutch Government, serious harms were suffered by thousands of people, ultimately leading to the Government's resignation. A special review board found that the system had been operated without understanding, transparency or explainability, and that no checks or controls had been in place to challenge its decisions for over five years.

Another example is Clearview AI, which implemented facial recognition software and attempted to make it available on a mass scale. The harms were not anticipated because there was no specific regulation for facial recognition software, only a difficult-to-navigate piece of primary legislation (DPA 2018/UK GDPR). The UK's data protection legislation was applied by the regulator after various complaints were raised with the ICO. The regulator decided that the privacy of potentially millions of UK citizens may have been compromised without consent, and subsequently fined Clearview AI £7.5m.

This is an example of how, in the absence of specific regulation, harms occurred and action was taken only afterwards. The fines may have cost Clearview AI a large amount of money, but the harm had already been done to the privacy of the individuals whose data may be embedded in its system.

There is no viable route for individuals to seek redress through the regulator; the regulator does not remedy harms, it only enforces the system. Individuals would have to fund their own civil actions against Clearview AI, which is generally not viable for many people.

We do not consider it ethical to know that harms are likely to occur and then to find a way of applying existing legislation only after the severity of those harms has been experienced. No one should be harmed, and the legislation and supporting structures should make it absolutely clear how developers and users are to prevent harm.

a.   Do you have views on the most important gaps, overlaps or contradictions?

The UK approach does not strengthen the rights of the citizen, who has no regulatory regime to call upon to protect them and remedy breaches of their rights. Instead, there is regulation at "system" level which imposes punitive fines on organisations that abuse human rights, but these measures appear to be taken only when large numbers of people are affected. An individual's rights are no greater or lesser when many individuals are affected; every individual suffers. The proposed approach to AI makes no progress towards protecting individual rights from abuse by systems which are not regulated and can be operated without any care, audit or consistent process.

Regulators do not currently have the instruments available to adequately guide AI developers and buyers towards making informed, safe and risk-mitigated decisions. They need guidance from Government, which could be set out in an overarching piece of AI legislation defining risk, trustworthiness, explainability and related concepts.

In summary, the most obvious challenge in the existing approach is the apparent lack of planned overarching legislation which lays out principles to be applied to all AI technologies, and which considers the mechanisms through which potential harms may be remedied, for example punitive fines and, alongside them, support for the individuals who have been harmed.

2.   Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper?

We agree with the context-driven approach, provided the lower tier of regulators sits underneath an overarching AI regulator. However, it seems that the regulators have created plans and roadmaps setting out their ideas for governance within five years, without harmonisation.

There is little or no mention of how AI systems already in use will be brought into a new regime, nor of how those placed on the market in the intervening period should be developed and introduced safely.

A context-driven approach could work; however, this needs to be fleshed out with the involvement of a wide range of stakeholders from both academia and industry.

a.   What do you see as the benefits of this approach?

If sector-specific regulators were equipped with an overarching piece of legislation laying down basic principles, they could create frameworks through which to assess and monitor the AI systems in use, and planned for use, in their sectors.

The medical device sector presents an interesting case study: an AI-based medical device can be certified under the current rules, alongside physical medical devices. It seems that plans to modify the medical devices regulations are still in progress, which could mean that an AI-based medical device may be released to market without any AI-relevant controls. In the case of a self-diagnostic tool, this could lead to many instances of important and potentially life-threatening symptoms being ignored because the AI made a determination that was medically unsafe and the user trusted the system on account of its medical certification, without knowing exactly what the AI had been certified to do safely.

b.   What are the disadvantages?

Regulators may inadvertently create further confusion through an unharmonised patchwork of guidance. This may be caused by the absence of an overarching body or a piece of primary legislation setting out the rules which must govern all AI systems, for example the definitions of risk and trustworthiness.

3.   Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach?

There should be overarching cross-sectoral (horizontal) and vertical principles, based on risk and harms, laid out in a piece of primary legislation, overseen by an AI regulator and adopted by the sector-specific regulators.

a.   Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?

The draft policy position on AI regulation leaves a number of gaps. A specific concern is the approach to hypothetical versus actual risk, which is open to interpretation and could amount to "do anything, and clean up the harms later".

Industry is quite simple: it operates within frameworks of rules and standards which enable company directors to evaluate risk, liability, opportunities, threats and so on, and to run their businesses with limited liability within the boundaries of the applicable company law. Risk in the context of business is subjective. Current company law does not consider risks and harms outside the shareholder/director relationship. A company can decide for itself whether there is a risk of harm to external stakeholders, and there is nothing in company law which holds it to account unless, for example, its activity raises concerns with the Environment Agency, HSE or HMRC, which can take action against directors for specific breaches of their rules. There is no proposed AI agency, as far as we are aware.

The issues, risks and harms posed by AI technologies have not necessarily been experienced yet. There is a degree of anticipation based on known issues; however, AI systems are being enhanced constantly, so the ways in which they can harm people also change continually. These need to be considered and a far wider-reaching set of rules established.

A database of AI systems in use, covering a range of contexts, could provide useful material from which issues and risks may be determined, and from which commonalities across implementations may be established.

Transparency and explainability are two key principles which seem to be missing from the existing proposals. As far as we can see, there are no outward-facing plans which require AI developers to inform users that they are interacting with an AI-based system as opposed to a human.

A summary of harms could be created to assist the developer in understanding who may be affected by their AI and in what way. The current framework of data protection impact assessments falls short of being fit for purpose in the context of AI. People are affected by AI; it is not only their data that is affected. Every aspect of their life and wellbeing could be harmed by an AI system. An example could be a recommendation engine or nudging algorithm which subliminally influences people, without their knowledge, to make a decision which may be harmful to them.

This is not a data protection issue; it is an exposure problem which could be analogous to exposure to radiation. The user (individual) may not know they have been exposed until the harms emerge.
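To make the "summary of harms" idea above concrete, the sketch below shows one hypothetical shape such a record could take. The field names and categories (HarmEntry, exposure_channel and so on) are our own illustrative assumptions, not part of the Government's proposals or any existing standard.

```python
# Hypothetical illustration only: a minimal "summary of harms" record mapping an
# AI system's potential harms to the people affected and the channel of exposure.
# All field names and categories are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class HarmEntry:
    harm: str                 # e.g. "subliminal nudging towards a harmful decision"
    affected_group: str       # who may be affected, e.g. "end users"
    exposure_channel: str     # how the harm reaches them, e.g. "recommendation feed"
    detectable_by_user: bool  # can the person tell they have been exposed?

@dataclass
class HarmsSummary:
    system_name: str
    entries: list[HarmEntry] = field(default_factory=list)

    def undetectable_harms(self) -> list[HarmEntry]:
        """Harms the affected person cannot perceive: the 'exposure' case above."""
        return [e for e in self.entries if not e.detectable_by_user]

# The nudging-algorithm scenario described above:
summary = HarmsSummary(
    system_name="recommendation engine",
    entries=[
        HarmEntry(
            harm="subliminal influence towards a harmful decision",
            affected_group="end users",
            exposure_channel="personalised content feed",
            detectable_by_user=False,
        )
    ],
)
print([e.harm for e in summary.undetectable_harms()])
```

A record of this kind would let a developer see at a glance which harms the affected person could never detect for themselves, which is precisely where an impact assessment limited to data protection falls short.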

4.   Do you have any early views on how we best implement our approach?

There should be a move towards the creation of an overarching piece of legislation which forms a framework of risk and trustworthiness through which the assured use of AI can be effectively monitored. Such a framework should form the backbone for a number of sector-specific standards which define in detail how each sector should develop, evaluate and monitor the AI systems placed on the market.

a.   In your view, what are some of the key practical considerations?

The AI assurance framework should be supported by the overarching legislative framework and by a detailed set of evaluation and performance-testing standards. These should allow developers to meet defined safety and ethical criteria, and enable buyers to make informed decisions based on common principles which are baked into the AI development and assured through a form of certification or accreditation, much like CE or UKCA marking.
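As a purely illustrative sketch of how such a certification mark might gate a release, the snippet below checks a developer's evidence pack against a set of required criteria. The criteria names are invented for this example; real evaluation and performance-testing standards would define the actual requirements.

```python
# Hypothetical sketch of a certification-style gate, analogous to CE/UKCA marking.
# The criteria below are invented for illustration only.

REQUIRED_CRITERIA = {
    "risk_assessment_completed",
    "performance_tested_to_standard",
    "transparency_notice_provided",   # users are told they are interacting with an AI
    "explainability_documented",
}

def conformity_check(evidence: dict[str, bool]) -> tuple[bool, set[str]]:
    """Return (certified, missing_criteria) for a developer's evidence pack."""
    missing = {c for c in REQUIRED_CRITERIA if not evidence.get(c, False)}
    return (not missing, missing)

certified, missing = conformity_check({
    "risk_assessment_completed": True,
    "performance_tested_to_standard": True,
    "transparency_notice_provided": False,
    "explainability_documented": True,
})
print(certified, missing)  # False {'transparency_notice_provided'}
```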

b.   What will the regulatory system need to deliver on our approach?

An AI regulator should sit in a position from which it can assess, adjudicate on and harmonise the actions of the sector-specific regulators. Without an AI regulator, sector regulators may diverge or, worse, make undemocratic decisions which have not been scrutinised by the normal processes.

c.   How can we best streamline and coordinate guidance on AI from regulators?

An overarching AI regulator, underpinned by a piece of primary legislation, could act as the coordinator of the sector regulators.

5.   Do you anticipate any challenges for businesses operating across multiple jurisdictions?

The UK standards ecosystem for AI-based technologies should closely align with the EU AI Act (and other highly regarded international policies) regardless of the UK Government's position. If the UK position drifts too far from the EU position, innovation may be stifled, because the barriers to entry to the EU market would differ, and developing to different laws in different regions increases the cost of exporting to those markets.

The cost of compliance and ongoing monitoring could effectively double if two versions of the same innovation must be developed and maintained. It is therefore likely that developers will seek to meet the requirements of the highest bar seen so far, which is the EU AI Act. The evidence to support this view is rooted in many years' experience of developing and taking to market highly technical products and software which must meet different rules in the EU and elsewhere when sold outside the UK.

The principle we adopt is to find the highest bar and ensure we can always meet it. This means that the protections and features in the product may exceed the requirements of other markets, but the reduction in development and maintenance cost outweighs the disadvantages of having an over-featured product in a market with a low regulatory bar. This can of course change, depending on the size of the market, the product, local competitors and a range of other factors such as supply chain, raw materials and so on.

a.   Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?

We consider that cross-border trade and international cooperation are driven by harmony in the approach to risk and regulation. Developers need support to help them comply with EU and other international regulations. The UK regulators should consider what an AI adequacy framework might look like, in order to give developers assurance that they can sell their UK-developed AI into international markets without having to redevelop it against a different set of principles, some of which have to be designed into the software itself.

6.   Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

We have suggested that an AI database be set up by the AI regulator, containing a record of the AI systems in use, their type, the sector in which they are used, and the risks and potential harms which may be suffered if the systems either get out of hand (autonomously) or are poisoned by bad actors.
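A minimal sketch of what one entry in such a database might look like is shown below, with fields drawn from the description above (type, sector, risks and potential harms). The field names and example values are our own illustrative assumptions, not a defined schema.

```python
# Illustrative sketch only: one record in the suggested AI database.
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    system_name: str
    ai_type: str                # e.g. "facial recognition", "predictive scoring"
    sector: str                 # e.g. "security", "healthcare", "welfare"
    use_context: str            # the specific deployment context
    known_risks: list[str]      # risks identified before deployment
    potential_harms: list[str]  # harms if the system misbehaves or is poisoned

entry = AIRegisterEntry(
    system_name="example access-control system",
    ai_type="facial recognition",
    sector="security",
    use_context="site entry verification",
    known_risks=["demographic bias", "false matches"],
    potential_harms=["wrongful denial of access", "privacy intrusion"],
)
```

Aggregating records of this kind across sectors would give the regulator raw material from which to identify common risks and to monitor effectiveness at both the individual-regulator and system level.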

About Anekanta Consulting

Anekanta® Consulting is an AI innovation and strategic advisory company founded by an experienced security industry technology leader. Anekanta® Consulting works with international stakeholders across sectors and domains including defence, security and video surveillance, critical infrastructure, and smart cities. We enhance the innovation, product and corporate strategies of our global technology clients through our specialised services, which range from strategy consulting to horizon scanning and M&A research. We navigate the complex legislative landscape, including the EU Artificial Intelligence Act and GDPR requirements, for the use of AI across horizontal industrial applications. We also lead and influence the ethical and legal use of controversial AI technologies such as facial recognition software. We deliver outstanding AI and technology strategy outcomes which augment strategic decision-making at the highest level. Our teams are built with the customer's requirements at the centre of our mission, and we involve the best talent to achieve outstanding results ahead of the market. Teams currently operate in small groups of 2-3 people for a project, and upwards of 25 people for a longer deliverable.


[1] Presented to Parliament by the Secretary of State for Digital, Culture, Media and Sport by Command of Her Majesty. Laid on Monday 18 July 2022. Command Paper: CP 728 © Crown copyright 2022 ISBN: 978-1-5286-3639-1

[2] Xenophobic Machines, Amnesty International. https://www.amnesty.org/en/documents/eur35/4686/2021/en/ (Accessed 23/9/2022)

For further information about Anekanta® Consulting’s services in the area of AI risk assessment, please contact us at ask@anekanta.co.uk or fill in the contact form.

