Dr. Maurice Chiodo BSc (Hons), PhD Melb, MASt, PhD Camb
Principal Scientific Consultant – Ethics and Systemic Risk in Digital Technology

Dr. Maurice Chiodo is an award-winning mathematician and a leading figure in ethics in mathematics, renowned for his expertise in the systemic risk inherent to mathematically driven digital technologies. He holds two PhDs in mathematics, from the University of Melbourne and the University of Cambridge, and is a former Fellow and Director of Studies at King’s College, Cambridge. Prior to this, he was a Marie Skłodowska-Curie postdoctoral researcher in the Faculty of Mathematics at Cambridge, holding one of the most highly regarded postdoctoral awards in Europe. Dr. Chiodo is currently an Assistant Research Professor at the University of Cambridge Centre for the Study of Existential Risk. He is also the Founder and Principal Investigator of the Ethics in Mathematics Project, researching and teaching responsible behaviour and ethical awareness in mathematics.
Dr. Chiodo brings a unique blend of deep critical insight and commercial experience to Anekanta®, where he has supported our projects since 2022, solving complex problems in AI and digital technology design and use cases. These include biometrics, surveillance, behavioural inference impacts, and opaque decision-making models. He also provides critical support to clients in identifying and mitigating business risk under frameworks such as the EU AI Act and the AI management system standard ISO 42001. His recommendations include strategic improvements to the development and use of AI systems, leading to commercially beneficial and societally responsible deployments.
Award-Winning Innovative Teaching
Dr. Chiodo received student-led recognition for Innovative Teaching at the University of Cambridge. He has also designed, delivered, and examined several mathematics lecture courses at Cambridge, at undergraduate and master’s levels. Dr. Chiodo’s innovative approach informs Anekanta®’s development workshops, AI literacy training, and board-level governance training, ensuring our clients gain genuine understanding and strategic foresight in AI and digital technology adoption.
Research
Dr. Chiodo has authored over 20 peer-reviewed publications, including 9 in pure mathematics, and has several ongoing preprints on ethical and safety aspects of technology. The selection of publications below lists Dr. Chiodo’s significant cutting-edge research in AI safety, demonstrating the deep expert knowledge he brings to the commercial application of his research in Anekanta®’s client engagements. We have summarised highlights from these papers, together with insights into their relevance to Anekanta®’s work in AI strategy, risk, literacy, and governance. References to paper numbers, such as [36], correspond to the listing of papers on Dr. Chiodo’s research page.
AI Use Case Risk and Safety Research Highlights
Human-in-the-Loop Systems: Dr. Chiodo’s novel analysis of human-in-the-loop (HITL) systems [36], including their technical setups, failure modes, and responsibility/liability characteristics, provides new insight into the serious shortcomings of human intervention as a “failsafe”. The research examines the practical HITL challenges experienced with the fire detection and alarm systems in Notre-Dame Cathedral, where inadequate consideration of how humans interpret digital system outputs meant that vital time was lost, resulting in a catastrophic fire. Furthermore, the paper examines the legal and regulatory approach to HITL in automated decision-making, including Article 22 of the GDPR and the EU AI Act’s human oversight requirements for high-risk systems. HITL is a misunderstood but vital factor in the commercial design and use of AI systems. This research contributes to Anekanta®’s approach to risk assessments and mitigations for ISO 42001, and to development and use case risk evaluations for the EU AI Act.
Algorithmic Collisions: This cutting-edge work on algorithmic collisions [37] characterises the significant hidden risks in AI deployments arising from complex interactions with other AIs, especially agentic AIs. The research underlines the urgent need for transparency and explainability in AI system design in order to uncover interactions between interconnected AI algorithms, particularly where there is no human in the loop. While the EU AI Act partly addresses these challenges, Anekanta® analyses AI system architectures to uncover and mitigate such collisions through forensic use case evaluation and governance strategies.
Model Collapse: In this groundbreaking paper [33], Dr. Chiodo considers the implications of LLMs and generative AI (Gen AI) models trained on their own outputs, leading to model collapse. Featured in The Register, the work highlights the issue through the observation that data scraped from the internet before 2022 is largely human-generated, whereas since then models have increasingly been ingesting their own outputs. This research informs Anekanta®’s services, which extend to recommended design considerations that reduce the potential for future model collapse within commercial business knowledge systems and products.
Manifesto for the Responsible Development of Mathematical Works: This is a tool developed for practitioners and management [26] which embeds ethical awareness and practice into early-stage mathematical technology development and lifecycles, in order to understand and mitigate digital system risk. The manifesto has been featured by the University of Cambridge Centre for the Study of Existential Risk (CSER) and published by The Critical Philosophical & Psychoanalytic Institute for Mathematics Education in the Journal for Theoretical & Marginal Mathematics Education. The manifesto forms a key element of Anekanta®’s development risk analysis and use case evaluations, which include deep-dive discovery and review of the software development lifecycle.
AI Development and Integration in Military and High-Stakes Systems: Dr. Chiodo’s work on AI Development [30] and AI Integration [34] covers the practical intricacies of developing and integrating AI tools in high-stakes environments. The first paper explores the often-underappreciated influence AI developers have on the final product; focusing on AI-assisted military decision-making, it argues that AI developers must be properly educated in the context, use, and consequences of their technology. The second explores the role of integrators as a pivotal, yet under-recognised, part of the AI value chain, arguing that integrators must translate between technical developers and political-military users, balancing competing priorities, ethical constraints, and system-level risks. Dr. Chiodo highlights how the fundamental role of developers shapes the trajectory of an AI product, and how the mediating role of integrators is critical to ensuring responsible and safe deployment of AI in complex sociotechnical systems – two perspectives aligned with Anekanta®’s focus on responsible AI development, integration, governance, and assurance across domains where the consequences are profound.
Handing Over the Keys to the City: When Governments May Inadvertently Solve One Crisis with a Bigger One: This paper, led by Dr. Chiodo in conjunction with other researchers at the Centre for the Study of Existential Risk, introduces the concept of “handing over the keys to the city”, whereby political leaders facing severe crises delegate (potentially excessive) decision-making authority and/or operational control to external entities perceived as holding indispensable solutions. While intended to resolve the initial crisis, this delegation has the potential to inadvertently create larger, more intractable problems, and the desperation felt during a crisis can push leaders into Faustian pacts, trading control for perceived salvation. In this paper, Dr. Chiodo develops a conceptual framework for analysing situations where governments hand over the keys to the city, presenting typologies for different crises, the types of key that can be delegated, and the extent to which such delegations can occur, and concludes by drawing out the policy implications of this framework. From an Anekanta® perspective, the same principles can be seen playing out in industry. Organisations, and sometimes entire sectors, increasingly hand over the keys to their digital asset base through an over-reliance on opaque vendor solutions. This often happens without the internal AI literacy or governance capabilities needed to interrogate those solutions effectively, from board oversight through to procurement and operational levels. Dr. Chiodo has a deep understanding of when, and how, such unwarranted handovers of power to technology and technologists may be occurring in industry.
Selection of publications
We have included a range of Dr. Chiodo’s publications to illustrate the depth and breadth of his thinking and expertise when applied in academic, legal, and commercial environments. A full list of Dr. Chiodo’s research works can be found on his research page.
37. The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World, with D. Müller. Preprint at SSRN-id5269302 and at arXiv:2505.20181v1. SSRN link, ArXiv link.
36. Formalising Human-in-the-Loop: Computational Reductions, Failure Modes, and Legal-Moral Responsibility, with D. Müller, P. Siewert, J. Wetherall, Z. Yasmine, J. Burden. Preprint at arXiv:2505.10426v1. ArXiv link. Published at ICLR 2026.
35. Model Collapse and the Right to Uncontaminated Human-Generated Data, with J. Burden, H. Grosse Ruse-Kahn, D. Müller, L. Markschies, S. Ó hÉigeartaigh, R. Podszun, H. Zech. Harvard Journal of Law and Technology Digest, 8 March 2025. Journal link.
34. Integrators at War: Mediating in AI-assisted Resort-to-Force Decisions, with D. Müller, M. Sienknecht. Preprint at SSRN-id5094539 and at arXiv:2501.06861v1. SSRN link, ArXiv link.
33. Legal Aspects of Access to Human-Generated Data and Other Essential Inputs for AI Training, with J. Burden, H. Grosse Ruse-Kahn, D. Müller, L. Markschies, S. Ó hÉigeartaigh, R. Podszun, H. Zech. University of Cambridge Faculty of Law Research Paper, No. 35/2024. Preprint at SSRN-id5045155. SSRN link.
32. Why mathematics is not neutral, and other myths, with D. Müller. LSE Higher Education Blog, October 2024. Journal link.
31. A Field Guide to Ethics in Mathematics, with D. Müller. Notices of the American Mathematical Society, August 2024. Journal link. ArXiv link.
30. Educating AI Developers to Prevent Harmful Path Dependency in AI Resort-to-Force Decision Making, with D. Müller, M. Sienknecht. Australian Journal of International Affairs, 78(2), 210–219 (2024). Journal link.
29. Regulating AI: A Matrix for Gauging Impact and its Legal Implementation, with H. Grosse Ruse-Kahn, D. Müller, L. Ossmann-Magiera, H. Zech. University of Cambridge Faculty of Law Research Paper, No. 12/2024. Preprint at SSRN-id4765104. SSRN link.
28. Teaching Resources for Embedding Ethics in Mathematics: Exercises, Projects, and Handouts, with D. Müller. Preprint at arXiv:2310.08467v1. ArXiv link.
27. Mathematical Artifacts Have Politics: The Journey from Examples to Embedded Ethics, with D. Müller. Preprint at arXiv:2308.04871v1. ArXiv link.
26. Manifesto for the Responsible Development of Mathematical Works – A Tool for Practitioners and for Management, with D. Müller. Preprint at arXiv:2306.09131v1. Journal link. ArXiv link.
25. A useful ethics framework for mathematics teachers, with L. Rycroft-Smith, D. Müller, D. Macey. In: Ernest, P. (eds) Ethics and Mathematics Education. Advances in Mathematics Education. Springer 2024. Journal link. ArXiv link.
24. A Hippocratic Oath for mathematicians? Mapping the landscape of ethics in mathematics, with D. Müller, J. Franklin. Science and Engineering Ethics 28, 41: Article 41, 1-30, (2022). Journal link. ArXiv link.
23. Questions of Responsibility: Modelling in the Age of COVID-19, with D. Müller. SIAM news 53, No. 7, September 2020, 6-7. Journal link.
22. Unlocking Higher Education Spaces – What Might Mathematics Tell Us? Study group working paper, Turing Gateway to Mathematics, (2020). Journal link.
21. Ethics in Mathematics – why is it so hard? MAA Values Blog Post, February 2020. Journal link.
19. Teaching Ethics in Mathematics, with P. Bursill-Hall. Newsletter of the London Mathematical Society 485, 22-26, November 2019. Journal link. Republished in the Newsletter of the European Mathematical Society 114, 38-41, December 2019. Journal link.
18. The Importance of Ethics in Mathematics, with T. Clifton. Newsletter of the London Mathematical Society 484, 22-26, September 2019. Journal link. Republished in the Newsletter of the European Mathematical Society 114, 34-37, December 2019. Journal link.
Additional Commercial Insights
AI Start-up Ethics Assessments and Consultations
In addition to his commercial work with Anekanta®, between 2020 and 2022 Dr. Chiodo conducted ethics evaluations and onboarding consultations with over 25 AI start-up teams, including two extensive deep dives, as part of the Digital Catapult Machine Intelligence Garage Ethics Advisory Board. The Machine Intelligence Garage was designed to help start-ups develop and build machine learning and artificial intelligence solutions. It gave innovators with a well-defined business idea and technical capability access to relevant expertise, including investment and applied AI ethics.
Distributed Ledger Technologies (DLT)
Dr. Chiodo also created the ethics framework and guidelines for the DECaDE blockchain project: its “Responsible Innovation Toolkit”, mentioned here. This was a substantial body of work, taking over a year to complete, as the ethics of blockchain technology was such a new area. DECaDE builds upon world-leading expertise at the intersection of Artificial Intelligence (AI) and Distributed Ledger Technologies (DLT). Co-creation is driven through the Digital Catapult via both its DLT Field Labs and Machine Intelligence Garage, engaging diverse private and public sector end-users.
Engaging Anekanta®
Pauline Norstrom is the driving force behind Anekanta®, and is instrumental in scoping the client problem and devising Anekanta®’s strategic response, delivered through our multi-disciplinary teams. Our services are supported by our AI Governance Framework and AI Risk Intelligence System™.
Anekanta® partners with boards and senior leaders on a defined project basis to embed AI into business strategy, drive innovation and capability, and strengthen positioning for growth and market advantage – underpinned by regulatory compliance and best governance practice throughout the AI System Lifecycle.
To discuss your needs please contact us here or explore our services.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
