Anekanta®’s Leadership in High-risk AI and Biometrics
Anekanta® AI has developed a number of independent, vendor-agnostic ethical AI governance frameworks, expertly designed to assess the safety of high-risk AI development and deployment and its impact on stakeholders.
Our frameworks were developed by specialists in law, ethics and responsible AI, drawing on the OECD's globally recognised principles for ethical, trustworthy and responsible AI, which underpin pending AI regulation and standards, and contextualised to the specific use cases of high-risk AI, biometrics and facial recognition technology. Emerging AI standards include ISO/IEC 42001, ISO/IEC 22989 (free issue), BS 9347 and ISO/IEC 23894, in addition to a range of AI-specific performance, transparency and explainability standards.
Responsible AI Governance Framework for Boards
Anekanta®’s Responsible AI Governance Framework (“RAI framework”) was originally developed in 2020 as an ethical framework*, prior to the emergence of generative AI. The updated RAI framework (2024) provides Boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles which can sit at the heart of an AI governance policy. It is for Boards who wish to start their AI journey, and for those who recognise that AI governance may be mission critical since the emergence of generative AI (Gen AI) and large language models (LLMs).
AI Risk Intelligence System™ for biometrics and high-risk AI
Our AI evaluation system supports the discovery process for the EU AI Act and other world regulations. We assess AI systems in use which collect and make decisions about biometric data. We score the risk, determined from the data we collect, against legal frameworks and trustworthy AI principles. We assess the impact on fundamental rights and company reputation, and recommend mitigating actions, including legal compliance, deployment of an AI management system (ISO/IEC 42001:2023), agreed metrics to measure AI impact, and regular reviews.
Featured in the UK Government Portfolio of AI Assurance Techniques and the OECD Catalogue of Tools and Metrics for Trustworthy AI.
Facial Recognition Privacy Impact Risk Assessment System™
Specifically designed to address the needs of commercial organisations that are scoping the potential use of facial recognition technology to solve safety and commercial loss problems.
Our system extends into over 70% of the world’s regions and national requirements for the use of facial biometric processing software, including the UK (BS 9347), EU, USA, Canada, APAC and CPTPP countries.
BS 9347 Facial recognition technology – Ethical use and deployment in video surveillance based systems – Code of Practice
Additionally, a new facial recognition standard, BS 9347, has been developed to UK regulatory requirements and international trustworthy principles derived from the OECD Principles for Trustworthy AI, by leading experts in the field of high-risk biometric-based AI technology, including the founder of Anekanta®AI and Anekanta®Consulting.
Ethical and legal guide to facial recognition software (British Security Industry Association)
Our founder, Pauline Norstrom, was a lead author of the BSIA ethical and legal guide to facial recognition software (also published by the Home Office) and a key member of the BSI committee which authored the new standard.
EU AI Act Compliance Services
Facial recognition technology, when used in video surveillance, is categorised as remote biometric identification under the new EU AI Act (‘the Act’). The entire value chain must meet the requirements of the Act, including the obligations set out for developers/providers and deployers of the technology. Law enforcement use is highly restricted unless specific safeguards are in place. Commercial use sits within the high-risk category and remains subject to safeguards; the requirements include compliance with GDPR, transparency and explainability in technical documentation, conformity assessments and CE marking, logging, and post-market monitoring. Many of these requirements should already be rooted in your AI policies for the management of AI systems and facial recognition technology within or by your business.
Anekanta® works with its global customer base to help them prepare for the provisions of the Act coming into application for their systems in the near future.
What are the next steps?
You can apply for your facial recognition risk report by following this link, or, for a deeper conversation about your Responsible AI, ethics, governance, risk and compliance needs, contact us directly through our contact form or email below.
Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
