At Anekanta®, we are against the use of AI which produces unacceptable impacts on individuals.
As facial recognition technology becomes more sophisticated and widely used, it is essential for organizations to have a plan in place to mitigate the risks associated with its use. Anekanta® AI’s Facial Recognition Privacy Impact Risk Assessment System™ can help you identify and address these risks, ensuring that you are compliant with all applicable laws and regulations, and that you are using this powerful technology in a responsible and ethical manner.
Our system is based on recognised global principles and standards, including the EU AI Act. It takes into account the specific requirements of your region, nation, and locality, as well as your intended purpose for using facial recognition.
We will provide you with an independent pre-mitigation opinion and a range of specific recommendations to help you steer towards compliance and minimize the impact on the rights and freedoms of individuals. Our independent report will be tailored to your specific use case and will provide you with actionable insights into:
- Potential risk level pre-implementation
- Legislation and regulation governing facial recognition software in your region
- Local ordinances with specific requirements for your city or state
- Recommended mitigations that consider all risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonized standards, and good practice
- Potential risk level if mitigations are implemented
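The assessment itself is delivered as an independent report rather than software. Purely to illustrate how the headings above relate to one another (a pre-mitigation risk level, the applicable rules and ordinances, the recommended mitigations, and a post-mitigation risk level), a reader tracking outcomes internally might capture them in a simple record along these lines. All names, fields and risk bands in this sketch are our own assumptions and do not represent Anekanta®'s methodology or scoring.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk bands only; the real assessment scale may differ."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    UNACCEPTABLE = 4


@dataclass
class AssessmentRecord:
    """Hypothetical internal record mirroring the report headings above."""
    use_case: str
    jurisdiction: str
    pre_mitigation_risk: RiskLevel
    applicable_legislation: list[str] = field(default_factory=list)
    local_ordinances: list[str] = field(default_factory=list)
    recommended_mitigations: list[str] = field(default_factory=list)
    post_mitigation_risk: RiskLevel | None = None


# Example usage with placeholder values (not real legal findings).
record = AssessmentRecord(
    use_case="Retail loss prevention (scoping stage)",
    jurisdiction="Example region / city",
    pre_mitigation_risk=RiskLevel.HIGH,
    applicable_legislation=["<relevant national data protection law>"],
    local_ordinances=["<any city- or state-level biometric ordinance>"],
    recommended_mitigations=["privacy impact assessment", "human review of matches"],
    post_mitigation_risk=RiskLevel.MEDIUM,
)
print(record.pre_mitigation_risk.name, "->", record.post_mitigation_risk.name)
```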
To get started, contact us.
Background to Anekanta®'s work on Responsible AI in facial recognition technology
Anekanta® AI have developed a number of independent, vendor-agnostic ethical AI governance frameworks, expertly designed to assess the safety of high-risk AI development and deployment and its impact on stakeholders.
Our frameworks were developed by specialists in law, ethics and responsible AI, using the recognised global principles for ethical, trustworthy and responsible AI which underpin emerging AI regulation and standards, contextualised to the specific use cases for facial recognition technology. Emerging AI standards include ISO/IEC 42001, ISO/IEC 22989 (free issue), ISO/IEC 23894 and BS 9347, in addition to a range of AI-specific performance, transparency and explainability standards.
Our Facial Recognition Privacy Impact Risk Assessment System™
Our system is specifically designed to address the needs of commercial organisations that are scoping the potential use of facial recognition technology to solve safety and commercial loss problems. Read about it here.

Featured in the Responsible Technology Adoption Unit's Portfolio of AI Assurance Techniques (September 2024); read our case study here.

BS 9347 Facial recognition technology – Ethical use and deployment in video surveillance based systems – Code of Practice

Additionally, a new facial recognition standard, BS 9347, has been developed by leading experts in the field of high-risk biometric AI technology, including the founder of Anekanta® AI and Anekanta® Consulting. The standard is aligned with UK regulatory requirements and international trustworthy AI principles derived from the OECD Principles for Trustworthy AI.
Ethical and legal guide to facial recognition software (British Security Industry Association)
Our founder, Pauline Norstrom, was a lead author of the BSIA ethical and legal guide to facial recognition software (also published by the Home Office) and a key member of the BSI committee which authored the new standard.

Our system covers over 70% of the world's regional and national requirements for the use of facial biometric processing software, including the EU, USA, Canada, APAC and CPTPP countries.
Our Ethical Framework for AI Implementation, AI Risk Intelligence System™ for biometrics and high-risk AI
Featured in the UK Government Portfolio of AI Assurance Techniques and the OECD Catalogue of Tools and Metrics for Trustworthy AI.

EU AI Act Compliance Services
Facial recognition technology, when used in video surveillance, is categorised as remote biometric identification under the new EU AI Act ('the Act'). The entire value chain must meet the requirements of the Act, which include the obligations set out for developers/providers and deployers of the technology. Law enforcement use is highly restricted unless specific safeguards are in place. Commercial use sits within the high-risk category, also subject to safeguards, and the requirements include compliance with the GDPR, transparency and explainability in technical documentation, conformity assessments and CE marking, logging, and post-market monitoring. Many of these requirements should already be rooted in your AI policies for the management of AI systems and facial recognition technology within or by your business.
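By way of a simplified, non-authoritative illustration (not legal advice, not the Act's wording, and not part of Anekanta®'s services), a deployer's internal readiness check against those obligations could be sketched as follows; the obligation descriptions and the helper function are assumptions made for the example.

```python
# Hypothetical readiness checklist for a deployer of a high-risk remote
# biometric identification system. The items paraphrase the obligations
# mentioned above for illustration only; they are not the Act's legal text.
DEPLOYER_OBLIGATIONS = [
    "GDPR compliance (lawful basis, DPIA) confirmed",
    "Provider's technical documentation covers transparency and explainability",
    "Conformity assessment and CE marking evidence obtained from the provider",
    "Event logging enabled and logs retained",
    "Post-market monitoring and incident reporting process in place",
]


def readiness_gaps(completed: set[str]) -> list[str]:
    """Return the obligations not yet evidenced, in checklist order."""
    return [item for item in DEPLOYER_OBLIGATIONS if item not in completed]


# Example usage with a partially completed checklist.
done = {
    "GDPR compliance (lawful basis, DPIA) confirmed",
    "Event logging enabled and logs retained",
}
for gap in readiness_gaps(done):
    print("Outstanding:", gap)
```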
Anekanta® works with its global customer base to help them prepare for the provisions of the Act as they come into application for their systems.
What are the next steps?
For a conversation about your Responsible AI, ethics, governance, risk and compliance needs, contact us directly through our contact form or email below.

Anekanta® AI and Anekanta® Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Original content created by Anekanta® is © Copyright 2016-2025. All rights reserved.
Nothing published by Anekanta® constitutes legal advice nor should it be construed as such. Content is produced for illustrative purposes to promote commercial services.

