AI Governance Framework for Boards


Why Your Board Should Drive AI Policy

In an era where artificial intelligence is transforming industries, the boardroom plays a pivotal role in steering its responsible adoption. By embedding AI policy into strategic decision-making, boards and senior management can ensure alignment with ethical standards, regulatory requirements, and organisational values.

Do you know whether AI is driving your board’s decisions?

Leadership at the board and senior management level fosters a culture of accountability, mitigates risks, and unlocks AI’s potential to deliver long-term value. Proactive governance is not just a safeguard—it is a competitive advantage in the age of intelligent systems.

Our AI governance framework, first published in 2020 and continually updated by our experts with the latest developments, provides boards and senior management with signposts to high-level areas of responsibility and accountability through twelve principles at the heart of AI policy. It is for non-technical leaders who wish to start their AI journey, and for those who recognise that AI governance has become mission critical since the emergence of generative AI (Gen AI).

The copyright of Anekanta®’s AI Governance Framework belongs to Anekanta®, and the framework should not be copied or reproduced without the permission of the copyright owner. It can be used provided the terms of the Creative Commons licence CC BY-NC-SA 4.0 are adhered to. This permits businesses to embed the framework within their governance processes and operations; any commercialisation of the framework is not permitted and will breach the terms of the licence. Please contact us if you would like to discuss your needs.


The twelve points contained in our AI Governance framework represent a selection of the high-level areas of consideration a Board may include in its AI policy. Note that additional legal requirements must be met by organisations providing or deploying AI systems, or using their decisions, within the European Union.

Anekanta®’s AI Governance framework is not an exhaustive list of actions; instead, it provides a starting point. Full implementation of Responsible AI within an organisation requires the engagement of all Responsible AI principles and the involvement of the entire organisation and its stakeholders (e.g., supply chain and customers), including all functions of the business which are developing or using AI systems.

OECD.AI Policy Observatory
Featured in the Catalogue of Tools & Metrics for Trustworthy AI

Anekanta®’s AI Governance framework principles originate from the OECD AI Principles, which have become embedded in one form or another in every trustworthy-AI guide, standard and pending regulation across the Western democracies.

Due consideration should be given to Responsible AI at every stage of the AI system lifecycle because development and use may have a wide-reaching impact on fundamental rights. Under the EU AI Act, impact will be traceable through the supply chain back to source.

Additionally, a new standard, ISO/IEC 42001 AI Management System, has been published and can be implemented now. This is a certifiable management system for AI policy setting, risk management, and continual monitoring, control and governance. It will also contribute to preparations for the harmonised standards planned to meet the EU AI Act high-risk requirements, for example prEN 18286, the quality management system (QMS) standard for Article 17.

The Framework

The Board is accountable both legally and ethically for the positive use of AI systems within the organisation. The Board should create policy, adopt Responsible AI principles and rigorously monitor them. Furthermore, it should ensure that the use of AI is safe, transparent and does not have a negative effect on the wellbeing of employees and stakeholders. The Board should prevent the introduction of bias into automated processes and decisions, and where bias already exists in the underlying culture of the organisation, the Board should be aware of it so that the AI system does not repeat the biased decisions of the past.

The Board should hold the final veto on the implementation and use of AI in the organisation or delegate the responsibility to a named accountable person within the organisation.

It is recommended that the Board, first and foremost, consider the context of the use of AI within the organisation by taking steps to audit all AI systems in development, in use or planned to be used. The audit of AI systems should be documented within the AI policy and accompanying management frameworks.
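By way of illustration only, a documented audit of this kind can start as a simple register of AI systems. The sketch below shows one possible shape for such a register in Python; the record fields, example entry and completeness check are our own illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record for one AI system discovered during the audit.
# Field names are hypothetical; adapt them to your own policy documents.
@dataclass
class AISystemRecord:
    name: str                       # e.g. "HR chatbot"
    owner: str                      # accountable person or function
    lifecycle_stage: str            # "planned" | "in development" | "in use"
    purpose: str                    # intended purpose, in plain language
    data_sources: list[str] = field(default_factory=list)
    risk_assessment_done: bool = False
    impact_assessment_done: bool = False
    last_reviewed: date | None = None

# The register is simply the documented list of all such systems.
register = [
    AISystemRecord(
        name="HR chatbot",
        owner="Head of People",
        lifecycle_stage="in use",
        purpose="Answer employee questions about HR policies",
        data_sources=["internal HR policy documents"],
    ),
]

# A first audit pass: flag systems missing a required assessment.
for record in register:
    if not (record.risk_assessment_done and record.impact_assessment_done):
        print(f"Audit flag: {record.name!r} is missing a required assessment")
```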

The AI Governance framework must be audited on a regular basis and performance measurement undertaken to ensure that the actual use of AI systems remains aligned with the AI policy goals. Such audits and measurement should be documented and reviewed by the Board.

The new AI Management System standard ISO/IEC 42001:2023 is a standalone management system for the development and use of AI systems, and is anticipated to become the de facto standard for Responsible AI management globally. Boards are urged to consider implementing it as soon as possible.

To avoid group-think and biased decisions, there should be regular independent reviews of the development and use of AI systems by suitably experienced and qualified persons or third-party organisations free of conflicts of interest, to encourage independent audit and reporting without interference. If an internal independent review committee (IRC) is formed, it should include employee representatives from more than one function of the business, where the business is large enough, to oversee AI system proposals and implementations. Small and micro businesses may have roles which cross multiple functions; it is therefore important that all decisions made about the responsible use of the AI system are documented and available for audit. It is recommended that organisations of any size seek third-party validation of internal policies.

AI system proposals should at least include a risk assessment, an impact assessment, an evaluation of the transparency of the system’s use, and the explainability of its decision-making process. AI system model cards may provide sufficient information, describing how the model was trained and how it reaches its decisions.
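As a sketch of what such a proposal package might contain, the model card below is represented as plain data; the keys loosely echo the published model card literature but are an illustrative subset we have assumed for this sketch, not a mandated or complete format.

```python
# A hypothetical minimal model card represented as plain data. The keys
# are an illustrative subset assumed for this sketch, not a mandated schema.
model_card = {
    "model_name": "candidate-ranking-v2",  # hypothetical system
    "intended_use": "Rank applications for human review; never for automated rejection.",
    "training_data": "Anonymised applications 2019-2023; sources listed in the data register.",
    "evaluation": {
        "accuracy": "reported per demographic group, not only in aggregate",
        "known_limitations": ["underrepresents candidates with career breaks"],
    },
    "explainability": "Per-decision feature attributions available to reviewers.",
    "human_oversight": "Final decision always taken by a named accountable person.",
}

# The IRC can screen a proposal by confirming the required sections exist.
required = ["intended_use", "training_data", "evaluation", "human_oversight"]
missing = [key for key in required if not model_card.get(key)]
print("Proposal complete" if not missing else f"Missing sections: {missing}")
```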

The IRC must recommend to the Board whether the AI system development or implementation may have a beneficial effect without negative impact. The IRC must examine the risk of bias becoming embedded in the AI system and propose measures to the Board which prevent and remove it.

High-level objectives and goals for the development and use of AI systems in the organisation must be created in line with its vision, mission and values. The AI system should benefit the organisation, its employees and customers.
Examples of high-level goals include:
a) augmenting human tasks
b) enabling better, faster human decisions
c) preventing bias
d) not causing harm to any employee or stakeholder

AI system data sources should be identified and documented where it is appropriate to do so, e.g., by the developing organisation. There must be a policy and method for detecting and reporting bias. If bias is discovered, action must be taken to identify the source, discover why it is there and remove it from the AI system. The data sources must be preserved and any adjustments documented for traceability. KPIs may be implemented to keep bias out of the organisation and out of its data sets. New standards are available which guide the data journey; we also cover data sources extensively in our workshops, so ask us for more details.
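As a minimal sketch of what a bias-detection KPI could look like, the following computes a simple demographic parity gap over logged decisions. The data layout, metric choice and alert threshold are all assumptions for illustration; appropriate measures depend on the use case and applicable law.

```python
# Illustrative bias KPI: demographic parity gap over logged decisions.
# The layout and the 0.10 threshold are assumptions for this sketch only.
decisions = [
    {"group": "A", "favourable": True},
    {"group": "A", "favourable": False},
    {"group": "B", "favourable": False},
    {"group": "B", "favourable": False},
]

def favourable_rate(group: str) -> float:
    """Share of favourable outcomes recorded for one group."""
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["favourable"] for d in subset) / len(subset)

parity_gap = abs(favourable_rate("A") - favourable_rate("B"))
if parity_gap > 0.10:  # illustrative threshold, not a recommended value
    print(f"Bias alert: parity gap {parity_gap:.2f}; trace and document the data source")
```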

To determine the potential impact of AI systems, fundamental rights impact assessments may be required in the EU under the EU AI Act prior to deploying the system.

Impact assessments should consider the possible negative effects and outcomes for employees who interact with the AI system or whose jobs may be affected. The approach to changing any job purpose must be in line with the organisation’s policy and compliant with the relevant employment, privacy and equality laws. Employees must be given sufficient information about the potential use of the AI system and the opportunity to provide feedback to the Board via the IRC. Employees must be advised of processes being undertaken by AI systems; e.g., they must be informed and aware when they are interacting with an HR chatbot.

Anekanta® specialises in risk and impact assessments for high-risk AI use cases. We evaluate all aspects of risk and impact, from bias to cyber security to quantum computing implications.

Employee education must be taken into account, and programmes must be implemented to explain the purpose of AI systems, to train employees in their use, and to inform them of the AI Governance Framework created and implemented in the company to prevent bias. Employees must be trained to monitor and report bias. New skills should be acquired which contribute to the management and fine-tuning of LLMs, for example prompt engineering. Prompt engineering is a discipline which can reduce biased outputs through the use of carefully curated prompts, complemented by continual adjustments to the model weights where these are accessible.
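To make the idea of a curated prompt concrete, here is a minimal sketch assuming a generic chat-style model interface; the instruction wording and message structure are illustrative examples, not a recommended template, and effective prompts must be tested against the organisation's own bias-monitoring criteria.

```python
# Illustrative system prompt intended to steer a model away from biased output.
# The wording is an assumption for this sketch; real prompts are developed and
# evaluated iteratively against documented bias criteria.
SYSTEM_PROMPT = (
    "You are an assistant supporting HR screening. "
    "Base every recommendation only on the skills and experience in the text. "
    "Do not infer or use age, gender, ethnicity or other protected attributes. "
    "If a request depends on such attributes, decline and explain why."
)

def build_messages(user_request: str) -> list[dict]:
    """Pair the curated system prompt with the user's request, in the
    role/content message format used by many chat-completion APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Summarise this candidate's suitability for the role.")
print(messages)
```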

Bias can manifest through the use of an AI system because the user interface allows biased searches or decisions arising from AI system recommendations. It should be recognised that users of the system may introduce bias and this risk should be included in the risk assessments. AI literacy* skills and competency should be monitored and reviewed on a regular basis to ensure that the system is being used correctly and the decisions are fair and explainable. The users should be advised against deferring to the AI system decisions when the impacts could affect an individual’s rights.

*Article 4 AI Literacy (EU AI Act) “requires that providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.” Note that recent proposals in the EU may shift the emphasis onto the Commission to promote literacy. We will provide further updates if the proposal is agreed.

The AI system must be designed for privacy and, where applicable, audited to ensure compliance with data privacy legislation such as the GDPR and the EU AI Act, as well as employment, union and equality legislation. The technical teams responsible for implementation must be trained on the Responsible AI (RAI) framework and be empowered to challenge the AI developers, to ensure that the transparency of the AI decision-making is understood and to determine whether a final human decision is needed. The AI technical teams must report their findings to the IRC as part of the AI system proposal process.

Boards should continually monitor the evolving regulatory requirements in the regions in which their organisations operate. A number of jurisdictions have created regulatory frameworks, including the UK, USA and EU. More information regarding the development of legislative frameworks can be found by following the links provided, or by contacting Anekanta®, which operates across multiple jurisdictions.

The EU AI Act is the most mature regulatory regime, in force since 1 August 2024.

Organisations should take steps to ensure they are able to comply with the requirements of the Act if they are developing, providing or deploying AI systems whose decisions are used or intended to be used in the EU.

We recommend that you contact Anekanta® to discuss the implications of the EU AI Act for your organisation. We provide specialised pre-compliance services and risk reports which evaluate current and planned developments and use cases. Through our AI Risk Intelligence System we determine and allocate AI systems to all risk categories from prohibited systems to those with high-risk biometric identification, categorisation or inference capabilities. We also categorise lower risk AI systems. Our services support the entire road map to EU AI Act compliance and ongoing monitoring.
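For orientation, the Act's broad risk tiers can be expressed as a simple enumeration. The triage function below is a deliberately toy, keyword-based illustration of a first-pass screen; actually assigning a system to a tier requires legal analysis against the Act itself (for example the Article 5 prohibitions and the Annex III high-risk use cases).

```python
from enum import Enum

# Broad EU AI Act risk tiers. The keyword triage below is a toy
# illustration of a first-pass screen, not a legal classification.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

def illustrative_triage(use_case: str) -> RiskTier:
    """Toy keyword screen showing the shape of an initial triage."""
    text = use_case.lower()
    if "social scoring" in text:
        return RiskTier.PROHIBITED
    if "biometric identification" in text or "recruitment" in text:
        return RiskTier.HIGH_RISK
    if "chatbot" in text:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(illustrative_triage("customer service chatbot"))  # RiskTier.TRANSPARENCY
```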

The AI system must be cyber secure by design and stand the scrutiny of external test and certification processes such as Cyber Essentials Plus and ISO/IEC 27001:2022. Additionally, new standards are emerging which are specifically designed to mitigate the cyber security risks of AI systems throughout their system lifecycle.

The AI system must be tested prior to any organisation-wide implementation to ensure compliance with the RAI framework and that it remains within the scope of the implementation. If the AI system is considered safe to deploy, the IRC should recommend to the Board that it proceed with full implementation, adjust the scope, or veto the system if conditions are not met.

The decision to implement the AI system rests with the Board as does accountability for ongoing safe and consistent performance which does not deviate from the intended purpose.

The objectives of the AI system should be agreed by the Board and IRC at the outset and the system implemented to a documented scope which sets out the measurements which will be used to assess the quality of the decisions for example, regarding bias.

Regular reviews should be undertaken to monitor the decisions of the AI and audit them against the scope and RAI framework.

If the AI system deviates from its objectives and scope, or from the AI Governance Framework in any way, the deviations must be documented, reported to the Board and IRC, and corrective actions implemented within a reasonable period of discovery, depending on the severity of the issue as determined by the risk and impact assessment. If the AI system produces unwanted or unexplained results, or if unacceptable harms emerge, the Board should instruct the organisation to cease using the AI system until the issues are resolved. This will be a legal requirement under the EU AI Act.
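A minimal sketch of the deviation check described above, assuming the documented scope has been reduced to named metrics and thresholds; the metric names and numbers are illustrative assumptions, and the scope document, not the code, remains the source of truth.

```python
# Illustrative deviation check: compare monitored metrics against the
# thresholds agreed in the documented scope. Names and values are
# assumptions for this sketch, not recommended figures.
scope_thresholds = {"parity_gap": 0.10, "unexplained_output_rate": 0.02}
observed = {"parity_gap": 0.14, "unexplained_output_rate": 0.01}

deviations = {
    metric: value
    for metric, value in observed.items()
    if value > scope_thresholds[metric]
}

if deviations:
    # Deviations must be documented and reported to the Board and IRC;
    # severe or unexplained harms warrant suspending the system.
    print(f"Report to Board and IRC: {deviations}")
else:
    print("Operating within the agreed scope")
```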

Your Next Steps

Our CPD certified training workshops cover everything you need to leverage AI benefits, manage risk and govern effectively through our twelve recognised principles and beyond.


Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Original content created by Anekanta® is © Copyright 2016-2026. All rights reserved.
Nothing published by Anekanta® constitutes legal advice nor should it be construed as such. Content is produced for illustrative purposes to promote commercial services.


The Evolution of the Framework

Anekanta®’s Responsible AI Governance Framework (“RAI framework”) was originally developed in 2020 as an ethical framework*, prior to the emergence of Generative AI.

Gen AI accelerated many Boards’ concerns due to the rapid adoption of these tools within their organisations without any guardrails. A high-profile example, one of many, which sent ripples throughout the corporate community was the Samsung case. It highlighted the potential exposure which may arise when well-meaning functions in a business enlist the help of Gen AI without a full appreciation of the risk and impact on intellectual property, people and their sensitive data, and the protection of trade secrets.

Some Boards have taken swift action to prohibit the use of public Gen AI and LLMs by their employees. This is a logical step and reflects a buying community attempting to mitigate risk in the absence of regulation. However, with the emergence of use case guidance produced by companies like Anekanta®, a growing community of specialised open-source and closed models which may be run in a private cloud, and the integration of Gen AI into off-the-shelf business tools, confidence is growing.

It could be argued that Gen AI has completed several mini ‘Gartner hype cycles’ and may have already passed the peak of inflated expectations, emerged from the trough of disillusionment, and be approaching the slope of enlightenment; for some digitalised organisations it has already reached the plateau of productivity, where tangible benefits are being enjoyed now through AI agents and chatbots.

Notes

(Update July 2023) In 2023, the IoD published their ‘AI in the Boardroom’ business paper for directors, which was developed by the IoD Expert Advisory Group for Science, Innovation and Technology. Anekanta®’s founder is a member of the group, proposed the idea that the IoD needed an AI framework, and gave them permission to use Anekanta®’s intellectual property, which has since become the foundational structure and the majority of the content behind the IoD’s document.

(February 2024) Since the emergence of Generative AI, our framework has been updated to reflect emerging risks and mitigations, and to underline the currency of the EU AI Act, which has been agreed.

*independently assessed by the University of Oxford.

Additional note on ethics:
The field of ethics is vast and, for the commercial community, complex and difficult to operationalise. It is important to acknowledge that there is no substitute for an ethical evaluation by a specialist to model the hypothetical and current impacts, which may vary depending on the intended use case and the type of technology or combinations of technologies and data sources. Ethical evaluation and Responsible AI go hand in hand; however, they should not be considered to represent the same set of considerations. An organisation may need to undertake both forms of evaluation to determine the risks and impacts arising, or that could arise, through the use of AI systems.

Intellectual Property Rights

This framework is available under a Creative Commons licence for businesses to embed within their governance processes and operations. It is not for commercialisation. Any commercialisation of this framework will breach the terms of the licence. Please contact us if you would like to discuss your needs.

Anekanta Responsible AI Governance Framework for Boards by Anekanta Ltd is licensed under CC BY-NC-SA 4.0

Copyright Anekanta®Consulting and Anekanta®AI 2020-2025. All rights reserved.