Anekanta®AI has developed a number of independent, vendor-agnostic ethical AI governance frameworks, expertly designed to assess the safety of high-risk AI development and deployment and its impact on stakeholders. Here is the background to our work in this area.
Category: Responsible AI (RAI)
What is Responsible AI? – Responsible AI (RAI) ensures that, from the inception of an idea through to the deployment and use of the AI system, opportunities are realised, harms are foreseen and mitigated, and there is a traceable system of governance, accountability and redress throughout the AI value chain. Anekanta®AI specialises in providing services and risk products which support the Responsible AI mission and process through de facto standards and regulation, including BS ISO/IEC 42001 and the EU AI Act (Regulation (EU) 2024/1689).
Anekanta® recognised for ethical and responsible AI development and implementation
We are proud to announce the release of a ground-breaking new British Standard, "Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice" (BS 9347:2024). The Anekanta® Privacy Impact Risk Assessment System™ de-risks FRT use cases anywhere in the world and recommends the new standard for UK mitigation purposes.
Examining the ‘cost-benefit’ of adopting AI within your business
By adopting Responsible AI practice, businesses can unlock the transformative power of AI technology while minimizing risks and maintaining stakeholder trust. Anekanta® recommends that organisations start their AI policy and AI Management System implementation (BS ISO/IEC 42001) alongside legal (EU AI Act) and regulatory considerations, dependent on the regions where the business operates or uses AI-driven decisions.
Anekanta® Risk and Governance Frameworks Featured by the OECD.AI Policy Observatory
Anekanta® Responsible AI frameworks have been featured in the UK Government Responsible Technology Adoption Unit (formerly CDEI) Portfolio of AI Assurance Techniques, and now we are announcing that we are published in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Find out more.
Announcement: New pre-compliance services for the AI Management System standard BS ISO/IEC 42001
Navigating the intricacies of BS ISO/IEC 42001 can be a daunting task, especially for organizations embarking on their responsible AI journey. This is where Anekanta®, with our expertise and experience in AI ethics and governance, can play a pivotal role in guiding organizations towards certification success.
‘Rite Aid’ FTC Ruling : AI Facial Recognition and Biometric Software
Here we provide an overview of the Rite Aid FTC judgment from a Responsible AI perspective, and in doing so offer an opinion on why the ruling went this way. The algorithmic bias risks, which may previously have been considered hypothetical, crystallized... Read more
In conversation with the BSI about Responsible AI governance and the new AI management system standard ISO/IEC 42001
We caught up with our founder, Pauline Norstrom, after her conversation with the BSI on the new AI Management System standard ISO/IEC 42001. "The need for organised and controlled management of the risk and impacts of AI Systems is non-negotiable."
How to ensure AI Trust, Risk Mitigation and Continuous Improvement
Responsible AI practice enhances reputation and profit while reducing risk. Here we explain what you can do to prepare your organization for success and, in doing so, align with emerging regulations.
Anekanta®AI featured in UK Government AI Assurance Portfolio
The Anekanta®AI Risk Intelligence System™ for biometric and high-risk AI supports organisations in building consistency into their governance processes. The system leads to the creation of a new, comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks associated with using certain types of software.
Stakeholder Spotlight: CoESS Annual General Assembly
An exclusive insight into the pioneering AI stakeholder work we undertake with the physical security industry, driving guidance, standards and regulation of high-risk AI including facial recognition software.
