AI Risk Management Through Standards
Artificial intelligence (AI) is increasingly shaping high-stakes domains such as healthcare, financial services, law enforcement, security, infrastructure, and education. With its transformative potential come ethical, operational, and societal risks. To manage these, standards like ISO/IEC 42001 offer a comprehensive governance framework, while sector-specific standards provide detailed guidance for addressing high-risk applications. Anekanta®’s article uses BS 9347, Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice, as an example of a high-risk AI standard to illustrate how sector and AI risk-specific standards may be seamlessly integrated into the ISO/IEC 42001 framework to ensure effective governance, risk treatment, and ongoing management of AI systems. This article is illustrative and provides a non-exhaustive, high-level overview. For specialised guidance on the planning and implementation of both standards, contact Anekanta® directly.
The Global Context for AI Governance
AI risk management is now a global priority. Frameworks such as the EU AI Act and the OECD AI Principles underscore the importance of ethical AI development and deployment. In the UK, where the development of specific AI legislation or regulation is at an early stage, standards like ISO/IEC 42001 are vital in providing a structured approach to AI governance, while sector-specific standards such as BS 9347 demonstrate how targeted controls can address specific risks. Together, they offer organisations a roadmap to ensure transparency, accountability, and fairness across diverse AI applications.
ISO/IEC 42001: The Management Platform for AI Systems
ISO/IEC 42001 is a comprehensive management system standard for AI, designed to govern multiple AI systems within an organisation. Its key elements include:
- Governance Framework: Establishes policies and controls for AI system development, deployment, and monitoring.
- Risk Management: Requires organisations to assess, categorise, mitigate and manage risks associated with their AI systems.
- Scalability: Enables oversight of diverse AI systems while addressing sector-specific challenges.
Annex C of ISO/IEC 42001 highlights potential organisational objectives and risk sources, such as data bias, operational safety, and ethical concerns. Annex D provides cross-sector implementation guidance, recognising the unique requirements of industries like healthcare, finance, and public safety.
BS 9347: An Example of a High-Risk AI Standard
BS 9347 addresses the ethical and operational complexities of deploying facial recognition technology (FRT), a high-risk AI application. While it is one example of a sector-specific standard, its principles and approach are applicable to other high-risk AI systems. BS 9347 incorporates OECD, EU HLEG and UK trustworthiness principles that align with ISO/IEC 42001, including:
- Governance and Accountability: Clear accountability structures for system operations.
- Human Agency and Oversight: Ensuring human involvement in critical, impactful decisions.
- Privacy and Data Governance: Protecting sensitive biometric data.
- Technical Robustness and Safety: Ensuring system reliability and security.
- Transparency and Explainability: Making processes understandable and contestable.
- Diversity, Non-discrimination, and Fairness: Mitigating biases and ensuring equitable outcomes.
Integration of Sector-Specific Standards into ISO/IEC 42001
The integration of sector-specific standards like BS 9347 into the ISO/IEC 42001 framework demonstrates how high-risk AI systems can be effectively managed.
Key areas of alignment include:
1. Risk Assessment and Categorisation:
ISO/IEC 42001 (Section 6.1.2) mandates risk assessments and categorisation. High-risk systems like FRT will be flagged in this process, requiring additional scrutiny. BS 9347 provides the risk controls and an operationalised management framework for the specific technology, implemented through policy requirements for the use of FRT.
2. Control Mechanisms:
ISO/IEC 42001 (Section 6.1.3) outlines tailored risk treatment plans. Annex A provides reference controls, while BS 9347 details practical safeguards, recommending two humans in the loop for decisions, and policies that ensure ethical and legal remedial action (Clause 9.2.3).
3. Alignment Table:
The table below non-exhaustively illustrates areas of alignment between ISO/IEC 42001 and BS 9347, focusing on risk management and sector-specific requirements:
| ISO/IEC 42001 Clause | Key Focus | Corresponding BS 9347 Principle | Example Implementation |
|---|---|---|---|
| 5 Leadership and 5.2 AI Policy | High level commitment and AI policy setting with clear board accountabilities | Governance and Accountability (Clause 4.2.1) | Accountability assigned to senior management. BS 9347 requires a Senior Accountable Person (SAP) |
| 6.1.2 Risk Assessment and Categorisation | Identifies and categorises risks from AI systems. | Assessing the need (Clause 5) | A needs-assessment flow diagram, with a recommendation not to deploy if risks cannot be mitigated effectively. |
| 6.1.3 Risk Treatment Planning | Defines tailored controls for high-risk systems. | Defined controls throughout, e.g., Human Agency and Oversight (Clause 9.2.3) | Human-in-the-loop (HITL) safeguards added for FRT decisions in impactful identification contexts. |
| 6.1.4 Risk Source Identification | Recognises trustworthiness-related risks such as bias and data misuse. | Privacy and Data Governance (Clause 9.3.4) | Biometric data from FRT managed with heightened safeguards and routes to redress and challenge by subjects. |
| 8.1 Operational Planning and Control | Establishes operational safeguards for AI systems. | Technical Robustness and Safety (Clause 9.4) | FRT systems are periodically tested for accuracy under varying lighting and demographic conditions. |
| Annex C Risk Sources | Identifies sector-specific risk sources and the implications of data quality. | Risk sources are defined throughout, for example Transparency and Explainability (Clause 4.2.5), Diversity, Non-discrimination, and Fairness (Clause 4.2.6) | Organisations implement signage and public disclosures for FRT use in public spaces. Ensures FRT algorithms are trained on demographically representative datasets. |
| Annex D Cross-Sector Implementation | Guides sector-specific adaptations of AI controls. | The standard in its entirety enables the ethical and legal management of FRT. | The standard should be referenced in the organisation’s risk controls and used to guide policy and governance of FRT use. |
4. Risk Source Integration:
Annex C of ISO/IEC 42001 explicitly highlights data quality as a risk source. Sector-specific standards like BS 9347 address these through principles such as privacy governance and fairness.
5. Operational Safeguards:
ISO/IEC 42001 (Section 8.1) calls for operational controls. BS 9347 builds on this by ensuring lifecycle management of biometric data, including its collection, storage, and deletion (Clause 9.4).
6. Cross-Sector Implementation Guidance:
ISO/IEC 42001’s Annex D underlines the importance of cross-sector adaptability. BS 9347 serves as a model for how sector-specific standards can address unique risks.
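The risk assessment, categorisation, and treatment steps above can be sketched in code. This is a hypothetical illustration only: neither ISO/IEC 42001 nor BS 9347 prescribes any data model or implementation, and every class, field, and control name below is an assumption introduced for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    # Hypothetical record for an entry in an organisation's AI inventory.
    name: str
    use_case: str
    risk_level: str = "unknown"   # e.g. "minimal", "limited", "high"
    sector_standards: list = field(default_factory=list)

def categorise(system: AISystem) -> AISystem:
    """Toy categorisation step, loosely mirroring ISO/IEC 42001 Section 6.1.2:
    high-risk uses are flagged and mapped to a sector-specific standard."""
    if system.use_case == "facial_recognition":
        system.risk_level = "high"
        system.sector_standards.append("BS 9347")  # sector-specific controls apply
    return system

def treatment_plan(system: AISystem) -> list:
    """Toy treatment step, loosely mirroring Section 6.1.3: high-risk systems
    attract additional controls beyond the organisation-wide baseline."""
    controls = ["AI policy", "operational monitoring"]  # baseline, cf. Section 8.1
    if system.risk_level == "high":
        controls += [
            "two-person human-in-the-loop review",   # cf. BS 9347 Clause 9.2.3
            "biometric data lifecycle management",   # cf. BS 9347 Clause 9.4
        ]
    return controls

# Example: a fictional retail FRT deployment flows through both steps.
frt = categorise(AISystem("StoreWatch", "facial_recognition"))
print(frt.risk_level, frt.sector_standards)
print(treatment_plan(frt))
```

The point of the sketch is the shape of the mapping, not the specifics: categorisation under the management system standard triggers the sector-specific standard, which in turn supplies the concrete treatment controls.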
Broader Integration Across AI Lifecycles
ISO/IEC 42001 also supports the integration of other AI lifecycle standards, such as:
- ISO/IEC 5259 (Data Quality): A suite of standards (parts 1–6) ensuring the provenance, accuracy, and completeness of data.
- Bias Audits: Addressing fairness and demographic equity across AI systems.
- Trustworthiness and Transparency Standards: Embedding accountability and explainability in system design.
These integrations highlight the unique demands of AI governance, which differ significantly from traditional standards like ISO 9001 and ISO/IEC 27001, although these standards remain interoperable because they follow a common structure for management systems. An AI-contextual approach ensures comprehensive oversight, from development to deployment.
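To make the data-quality dimensions named above concrete, the sketch below checks completeness and provenance over a toy dataset. It is illustrative only: ISO/IEC 5259 does not prescribe code-level checks, and the record fields (`source`, `collected_on`) and the dataset are assumptions invented for the example.

```python
def completeness(records, required_fields):
    """Fraction of records carrying a non-empty value for every required field."""
    if not records:
        return 0.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def has_provenance(record):
    """Provenance check: the record declares its source and collection date."""
    return bool(record.get("source")) and bool(record.get("collected_on"))

# Toy dataset: the second record is missing its label.
dataset = [
    {"id": 1, "label": "cat", "source": "vendor-a", "collected_on": "2024-01-02"},
    {"id": 2, "label": "",    "source": "vendor-b", "collected_on": "2024-01-03"},
]

print(completeness(dataset, ["id", "label"]))   # 0.5: one record lacks a label
print(all(has_provenance(r) for r in dataset))  # True: both declare provenance
```

In practice such checks would sit inside the operational controls of the management system, so that data-quality failures surface as risk events rather than silent model defects.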
Case Studies and Practical Applications
- Retail: A retailer implementing FRT for theft prevention can use ISO/IEC 42001 to manage all AI systems in use, and in doing so, categorise FRT as high-risk and apply BS 9347 to ensure compliance with human-in-the-loop, data governance, and transparency requirements.
- Healthcare: Diagnostic AI systems using patient data can integrate ISO/IEC 5259 for data quality and ISO/IEC 42001 for risk management, ensuring trustworthiness and ethical use.
- Financial Services: AI systems monitoring fraudulent transactions can align with ISO/IEC 42001 for oversight and implement sector-specific controls for data protection.
Challenges and Solutions
- Overlapping Standards: Organisations may struggle with overlapping requirements. ISO/IEC 42001’s modular framework simplifies integration.
- Resource Constraints: Prioritising high-risk systems for compliance ensures efficient resource allocation.
- Sector-Specific Complexity: Tailored guidelines, as seen in Annex D, and the use of sector specific standards such as BS 9347 address unique challenges.
Bringing it all together
ISO/IEC 42001 provides the overarching framework for managing AI systems, while BS 9347 serves as one example of how sector-specific standards can address high-risk applications. By aligning trustworthiness principles, these standards demonstrate how sector-specific needs can be seamlessly integrated within a unified governance platform. Furthermore, as AI adoption expands, integrating lifecycle standards like ISO/IEC 5259 into AI development activities will further enhance risk management, transparency, and accountability, ensuring ethical and effective AI deployment across sectors.
Anekanta®AI has developed an AI Risk Intelligence System for high-risk AI and biometrics which assesses all potential risks of inferences when data sources are combined or there has been a change in use of the technology, and recommends mitigations beyond any DPIA assessment. Furthermore, Anekanta®’s Facial Recognition Privacy Impact Risk Assessment System provides reports which assess the impact of FRT across multiple jurisdictions from human rights, Union laws, GDPR, the EU AI Act, BIPA, NYC 144 and beyond.

For further information about how Anekanta® can help you to understand, manage and control high-risk AI, get in touch.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
