It is encouraging to see the assembly of a central risk function laid out in the UK Government AI Regulation White Paper.

The central risk function may be seen by some as a kind of AI Regulator ‘in waiting’.

The business community is identified as being responsible for reporting new risks (and harms) to its sector regulator, which in turn will assess them and/or refer them to the central risk function; in theory, this may result in action by central government.

Challenges may arise when business activities span multiple regulators, as most do in some way.

UK Government central risk function explained.

Source of infographic: Policy paper A pro-innovation approach to AI regulation

Hypothetical vs actual risk.

The white paper places little focus on what might happen if AI is misused in the UK; rather, it seeks evidence of harms that are actually occurring. AI regulation will therefore not be introduced until the evidence supports the case for regulation and indicates what it should cover.

UK businesses should be aware that significant harms have already occurred elsewhere in the world, and these harms are now identified as risk categories in new legislation, for example the EU AI Act, due to be agreed in April 2023.

Business leaders take note.

The burden of responsibility for safe AI development and use is depicted as resting on the shoulders of business leaders, who must ensure that they have robust risk, audit, control, ethics, remedial and monitoring processes in place. Such controls should result in the gathering of evidence, which is to be reported voluntarily to Government, indirectly, via the range of regulators.

One may ask where the voice of the citizen appears in the framework. Without stronger, clearer direction in the Companies Act 2006 s.172, directors are not specifically compelled by law to address their potential duty of care to the individual (social responsibility) or to the environment. At this point, the flavour of the white paper is oriented towards business growth and profit.

How Anekanta Consulting may help.

Anekanta Consulting published a framework for the ethical implementation of AI in February 2020 to assist businesses in their journey towards the safe and ethical use of AI. The framework is available under a Creative Commons licence; please cite Anekanta Consulting if you use it.

Whether you need our help may depend on your internal capability, your board-level skills, the complexity of your business, the jurisdictions in which it operates, and the level of risk and potential harm you can identify which may affect employees, customers, the supply chain and the wider community of stakeholders, including the public.

We are on hand to help you in this process, and only a phone call or email away.

A word of caution, in lay terms, about the GDPR: it applies to all data sets containing any personal data, whether direct or inferred, which are used or combined to train AI, and which the AI processes to perform its functions. If the data originates in the UK or EU and the AI affects UK or EU citizens, fines of up to 4% of annual global turnover may be levied by the data regulators if breaches occur. The GDPR also sets out the law with regard to automated decision-making, profiling, and the collection and storage of special category data, including biometric data such as the face print, fingerprint, gait and voice, as well as personality traits, behaviour, race, religion, trade union membership and so on.

We are an ethical company with a mission to influence the wider stakeholder community for the greater good.

Our framework was adopted with our permission, by the Institute of Directors (IoD) as part of their ‘AI in the Boardroom’ business paper. AI in the Boardroom was created by the IoD Expert Advisory Group for Science, Innovation and Technology. Our founder sits on the group alongside her technology and AI peers.

Find out how Anekanta Consulting may help your business understand what it should and must do to prepare for the new measures: contact us via our contact form.

Latest news and insights

  • How to ensure AI Trust, Risk Mitigation and Continuous Improvement
    Responsible AI practice enhances reputation and profit, and reduces risk. Here we explain what you can do to prepare your organisation for success and, in doing so, align with emerging regulations.
  • Be responsible and de-risk your AI facial recognition software deployments
    Anekanta® AI has developed a number of independent, vendor-agnostic AI governance frameworks, expertly designed to assess the safety of high-risk AI and its impact on stakeholders. Our Facial Recognition Privacy Impact Risk Assessment System™ contains a small sub-set of our wider capability. Find out more.
  • Anekanta®AI: Comply with the EU AI Act
    One of the key requirements of the EU AI Act is that organisations must assess the risks posed by their AI systems before deploying them. Anekanta®AI can help developer and user organisations to comply with the Act by providing systematic and comprehensive risk assessments.
  • Anekanta®AI featured in UK Government AI Assurance Portfolio
    The Anekanta® AI, AI Risk Intelligence System™ for biometric and high-risk AI supports organisations to build consistency into their governance processes. The system leads to the creation of a new comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks associated with using certain types of software.
  • FaceComply: A partnership between Corsight AI, Anekanta®AI, Advent IM
    Corsight AI partners with Anekanta®AI and Advent IM to enable full facial recognition compliance through FaceComply, uniting expertise in AI, governance, risk management, data protection and cybersecurity to ensure the safe implementation of facial recognition technology.