In conversation with the BSI about Responsible AI governance and the new AI management system standard ISO/IEC 42001

AI Systems can involve complex value chains and a mix of closed and open source components, and AI and non-AI Systems interact in the physical world.

The need for organised and controlled management of the risks and impacts of AI Systems is non-negotiable.

We caught up with our founder, Pauline Norstrom, after her conversation with the BSI on the new AI Management System standard ISO/IEC 42001.

“It was a pleasure to discuss the new AI Management System standard, ISO/IEC 42001 with Matthew Chiles on the BSI TheStandardsShow this month!

As an industrialist with decades of experience in the specification and deployment of technology in vast integrated infrastructure systems around the world, I know the importance that value chain ecosystems place on compliance with standards. I have also been involved in setting standards and guidance for the use of AI-based technology for over 20 years.

Boards of organizations developing and using AI systems should consider ISO/IEC 42001 the ‘playbook’ for setting Responsible AI policy from the ‘top down’, followed through with cross-functional organizational adoption, defined roles, responsibilities and accountability, risk management, review, mitigation and so on.

The standard is certifiable and, it could be argued, is set to become the ‘golden thread’ of operationalized trust and confidence underpinning pending Western-world regulation, in particular the EU AI Act (Regulation (EU) 2024/1689).

I encourage you to listen in to the conversation.”

Pauline Norstrom, founder of Anekanta®Consulting and Anekanta®AI
Join the conversation between Matthew Chiles and our founder, Pauline Norstrom.

For information about our services, please explore the Anekanta® website and get in touch with any questions.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.

Further information
Explore our ISO/IEC 42001 pre-compliance and preparation services
Read our article about the complexity of AI risk