We are proud to announce the release of a groundbreaking new British Standard, "Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice" (BS 9347:2024). The Anekanta® Privacy Impact Risk Assessment System™ de-risks FRT use cases anywhere in the world and recommends the new standard for UK mitigation purposes.
Our expertise and focus are within industries which benefit most from the use of Artificial Intelligence based technologies such as natural language processing (NLP), video analytics and deep learning, biometrics and automated facial recognition.
For the purposes of security and safety, process optimisation and convergence, AI is being applied ethically and effectively across a number of sectors, including mobility/transportation, retail, manufacturing, critical infrastructure and smart cities.
Examining the ‘cost-benefit’ of adopting Responsible AI within your business
By adopting Responsible AI practices, businesses can unlock the transformative power of AI technology while minimising risk and maintaining stakeholder trust. Anekanta® recommends that organisations begin their AI policy and AI Management System implementation (BS ISO/IEC 42001) alongside legal (EU AI Act) and regulatory considerations, depending on the regions where the business operates or where its AI-informed decisions are used.
What does the new legal definition of AI systems mean in practice for commercial use cases?
Organisations may not be certain whether their systems are AI, given the range of contrasting views from vendors and buyers. It is important that they recognise and categorise these systems as soon as possible. Developers, providers and users/deployers of AI systems which meet the EU AI Act definition criteria, and whose decisions are utilised within the EU, must comply with the law.
Anekanta® Responsible AI frameworks featured by the OECD.AI Policy Observatory
Anekanta® Responsible AI frameworks have been featured in the UK Government Responsible Technology Adoption Unit (formerly CDEI) Portfolio of AI Assurance Techniques, and now we are announcing that we are published in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Find out more.
EU AI Act: Cleared to move to the next stage. Your action list to get started.
Organisations are urged to review their development road maps and current use cases to determine whether their AI systems sit in the prohibited and high-risk categories.
Announcement: New pre-compliance services for the AI Management System standard BS ISO/IEC 42001
Navigating the intricacies of BS ISO/IEC 42001 can be a daunting task, especially for organisations embarking on their responsible AI journey. This is where Anekanta®, with our expertise and experience in AI ethics and governance, can play a pivotal role in guiding organisations towards certification success.
‘Rite Aid’ FTC Ruling: AI Facial Recognition and Biometric Software
Here we provide an overview of the Rite Aid FTC judgment from a Responsible AI perspective, and in doing so consider why the judge ruled this way. The algorithmic bias risks, which may previously have been considered hypothetical, crystallised... Read more
EU AI Act Q & A for developers or users of high-risk and biometric AI
The European Commission has updated its Q & A following the provisional agreement of the text in December 2023. We have extracted the key points which relate to high-risk and biometric AI.
In conversation about AI Governance and ISO/IEC 42001 with the BSI
We caught up with our founder, Pauline Norstrom, after her conversation with the BSI on the new AI Management System standard ISO/IEC 42001. "The need for organised and controlled management of the risks and impacts of AI systems is non-negotiable."
The EU AI Act has been agreed – headlines you need to know about now
The trilogue negotiators reached a provisional agreement on the EU Artificial Intelligence Act on 9th December 2023. The EU AI Act will become law in all EU Member States in early 2024. To find out how to comply with the Act, get in touch.