Anekanta® AI Governance Framework


Anekanta®’s AI governance framework was originally developed in 2020 and independently assessed by the University of Oxford, prior to the recent explosion in awareness of AI as a beneficial business tool.

The foundations of Anekanta®’s ethical principles framework

The purpose of this set of principles is to ensure ethical AI practices throughout the organisation. It is clear that AI needs to be on the Board agenda and considered seriously as part of the organisation’s CSR requirements. AI should not be confined to the realm of IT, although the CIO may take responsibility for implementation and management.

(Update July 2023) The Institute of Directors’ (IoD) 2022 research further supported the view that boards need help with the assessment and management of the use of AI within their organisations. In 2023, the IoD published its ‘AI in the Boardroom’ business paper for directors, which was developed by the IoD Expert Advisory Group for Science, Innovation and Technology. Anekanta®’s founder is a member of the group, proposed that the IoD needed an AI framework, and gave the IoD permission to use Anekanta®’s intellectual property, which has since become the foundational structure for the IoD’s document.

We recommend the following framework, which serves as a blueprint for any organisation, in any sector, planning its AI strategy.

1. Board accountability

The Board is accountable both legally and ethically for the positive use of AI within the organisation. Ethical principles, policies, checks and measures must be in place and rigorously monitored. This is to ensure that the AI is safe and transparent, does not have a negative effect on the well-being of employees and stakeholders, and prevents the introduction of bias into automated processes and decisions. The Board holds the final veto on the implementation and use of AI in the organisation.

2. Audit and measure

The ethical principles must be auditable and measurable; they should be embodied in the ISO 9001:2015 quality system to ensure a consistent approach to the evaluation and use of AI by the organisation.

(Update July 2023) A new AI management standard, ISO/IEC 42001:2023, will soon be available from BSI. It serves as a standalone management system for the development and use of AI systems and is expected to become the de facto standard for safe AI management globally.

3. Ethics committee

An ethics committee which includes employee representatives should be formed to oversee AI proposals and implementations. Proposals should include a risk assessment, an impact assessment and an evaluation of the transparency of the AI’s decision-making process.

The committee must recommend to the Board whether the AI implementation can deliver a beneficial effect without negative impact. The ethics committee must also examine the risk of bias becoming embodied in the AI and propose measures to prevent and remove it.

4. Set high level goals

High-level goals for the use of AI in the organisation must be created in line with its vision, mission and values. The AI must benefit the organisation, its employees and its customers. Examples of the goals of AI include:

a) augmenting human tasks

b) enabling better, faster human decisions

c) preventing bias

d) not causing harm to any employee or stakeholder

5. Data sources

In defining the purpose of the specific AI implementation, the sources of data must be identified and documented. There must be a method of detecting and reporting bias. If bias is discovered, action must be taken to identify the source, discover why it is there, and remove it from the AI. The data sources must be preserved and any adjustments documented for traceability. KPIs must be implemented to keep bias out of the organisation and out of its data sets.
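A bias-detection KPI of the kind described above can be made concrete with a simple statistical check. The sketch below is illustrative only and is not part of the Anekanta® framework: it computes the demographic parity difference (the gap in positive-outcome rates between groups) over a set of automated decisions. The column names `group` and `approved` and the 0.1 escalation threshold are hypothetical assumptions.

```python
# Illustrative sketch: one simple bias KPI (demographic parity difference)
# that an organisation could monitor on a data set or on an AI system's
# decisions. Field names and the 0.1 threshold are hypothetical.

def demographic_parity_difference(records, group_key, outcome_key):
    """Return the gap between the highest and lowest positive-outcome rates across groups."""
    counts = {}  # group -> (total decisions, positive decisions)
    for rec in records:
        total, positives = counts.get(rec[group_key], (0, 0))
        counts[rec[group_key]] = (total + 1, positives + (1 if rec[outcome_key] else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical sample of automated decisions split across two groups.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

gap = demographic_parity_difference(decisions, "group", "approved")
if gap > 0.1:  # hypothetical KPI threshold triggering escalation
    print(f"Bias KPI breached: parity gap {gap:.2f}; report to ethics committee")
```

In practice the threshold, the protected attributes and the escalation route would be set by the ethics committee and documented alongside the data sources for traceability.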

6. Impact assessments

Impact assessments must be undertaken which consider the possible negative effects and outcomes for employees who interact with the AI or whose jobs may be affected. The approach to changing any job’s purpose must be in line with the organisation’s policy and compliant with the relevant employment, privacy and equality laws. Employees must be given sufficient information about the potential use of the AI and the opportunity to provide feedback to the Board via the ethics committee. Employees must also be made aware of which processes are being undertaken by AI even if they do not directly interact with it; for example, they must be informed when they are interacting with an HR chatbot.

7. Training

The training needs of employees must be taken into account, and programmes must be implemented to explain the purpose of the AI, to train employees in its use, and to inform them of the ethical framework created and implemented in the company to prevent bias. Employees must also be trained on the system used to monitor and report bias.

8. Privacy and compliance

The AI must be designed for privacy and audited to ensure compliance with data privacy legislation such as the GDPR, the upcoming EU AI Act, and organisation policies. The technical teams responsible for implementation must be trained on the ethics framework and must challenge the AI developers to ensure that the transparency of the AI’s decision making is understood and to establish whether a final human decision is needed. The AI technical teams must report their findings to the ethics committee as part of the AI proposal process.

9. High Risk – New EU AI Regulation

On 21st April 2021, the European Commission published its proposal for the regulation of Artificial Intelligence. The new regulations set out a risk-based approach to the implementation of AI horizontally across sectors, as well as vertically, providing sector-specific guidance for AI in law enforcement, education, safety systems and so on. The impacts will be wide-reaching.

Responsibility and accountability are clearly defined throughout the entire supply chain: the developer, the provider and the user are all accountable for the safe implementation of AI.

(Updated July 2023) Before AI is developed or deployed, a fundamental rights impact assessment must be carried out to determine the potential for harm to the health, safety and fundamental rights of people. From there, we help our customers determine what level of transparency, ex-ante assessment, CE marking and post-market monitoring is required.

Since the release of the proposal in April 2021, a substantial update occurred in November 2022 and was agreed by the European Council in December 2022. A further 600 amendments were agreed by the European Parliament in June 2023, which significantly change many of the prohibited and high-risk requirements: for example, a complete prohibition of the use of live remote biometric identification in any publicly accessible space, whether publicly or privately owned and operated. Furthermore, emotion detection is now banned in workplace, employment and law enforcement scenarios. Generative AI has been included, and the punitive fines have increased to a maximum of €40m or 7% of global revenue. These updates provide an insight into a small number of the changes.

Anekanta® have a team of experts in this field within the Anekanta® AI division and offer an impact and risk service which informs and guides our customers towards the right policies and procedures, which may include the need for self-certification and CE marking where required.

(Updated July 2023) We recommend that you contact Anekanta® to discuss the implications for developers, providers and users/deployers of AI systems in the EU, in anticipation of the proposal being ratified into EU law in early 2024 and becoming enforceable in early 2025.

10. Secure by design

The AI must be secure by design and must withstand the scrutiny of external testing and certification processes such as Cyber Essentials Plus and ISO/IEC 27001:2022.

This may include penetration testing to verify that the data sets used by the AI cannot be breached.

11. Testing

The AI must be tested prior to any organisation-wide implementation to ensure compliance with the ethics framework and the scope of the implementation. The ethics committee makes recommendations to the Board to proceed with full implementation, to adjust the scope, or to veto the project if conditions are not met.

The decision to implement rests with the Board as does accountability for ongoing safe and consistent performance which does not deviate from the intended purpose.

12. Review cycle

Regular reviews must be undertaken to monitor the decisions of the AI and audit them against the intended use and ethical framework.

If the AI deviates from the purpose and ethics in any way, the deviations must be documented, reported to the ethics committee and corrected within a reasonable period of discovery, depending on the severity of the issue as determined by the risk and impact assessments.
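The review cycle above can be supported by a simple deviation log that ties each finding to a severity-dependent correction deadline. The sketch below is a hypothetical illustration, not the framework’s prescribed process: the severity levels and the correction windows (in days) are assumptions standing in for values that the risk and impact assessments would actually supply.

```python
# Illustrative sketch: a minimal deviation log for the review cycle.
# Severity levels and correction windows are hypothetical assumptions;
# real values would come from the risk and impact assessments.

from datetime import date, timedelta

# Hypothetical mapping from assessed severity to correction deadline.
CORRECTION_WINDOW_DAYS = {"low": 30, "medium": 7, "high": 1}

def log_deviation(description, severity, discovered=None):
    """Record a deviation and compute the deadline for corrective action."""
    discovered = discovered or date.today()
    deadline = discovered + timedelta(days=CORRECTION_WINDOW_DAYS[severity])
    return {
        "description": description,
        "severity": severity,
        "discovered": discovered.isoformat(),
        "corrective_action_due": deadline.isoformat(),
        "reported_to": "ethics committee",  # per the review-cycle requirement
    }

entry = log_deviation("Decision drift outside intended purpose", "high",
                      discovered=date(2023, 7, 1))
print(entry["corrective_action_due"])  # 2023-07-02
```

Keeping each entry with its discovery date and deadline gives the ethics committee an auditable trail of deviations and of whether corrective actions landed within the agreed period.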

If you would like to learn more about the effective and ethical use of AI in business, and the steps you need to consider to become compliant with the new EU AI Act, please contact us at

Copyright Anekanta Consulting 2020-2023.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
