Anekanta® AI has developed a number of independent, vendor-agnostic AI governance frameworks, expertly designed to assess the safety of high-risk AI and its impact on stakeholders. Our Facial Recognition Privacy Impact Risk Assessment System™ contains a small subset of our wider capability. Find out more.
At Anekanta® we focus on maximising the opportunities and balancing the risks arising from the development and use of high-risk AI. High-risk AI is defined as AI which poses a risk to the health, safety and fundamental rights of individuals. A range of use cases is defined in the EU AI Act (Regulation (EU) 2024/1689) Annex III, across sectors and domains including biometrics; critical infrastructure; education and vocational training; employment, workers’ management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; and the administration of justice and democratic processes.
Additionally, we support a range of sectors concerned about the high-risk governance requirements primarily set out in Chapter III of the EU AI Act. Beyond the Act’s requirements, the implementation of the foundational principles for trustworthy AI, which originated with the OECD, blended with good board governance practice, is of interest to all sectors, and particularly to those which process personal data and whose activities may have a biased effect on groups underrepresented in data. Furthermore, by modelling commercial existential risk, we identify the potential show-stoppers which may result in a great AI application opportunity being permanently confined to the status of ‘pilot’. To find out how we can help you balance opportunity and risk for the maximum return, get in touch today.
Anekanta®AI: How to comply with the EU AI Act with risk intelligence
One of the key requirements of the EU AI Act is that organisations must assess the risks posed by their AI systems before deploying them. Anekanta® AI can help developer and user organisations to comply with the Act by providing systematic and comprehensive assessments of risk.
Anekanta®AI featured in UK Government AI Assurance Portfolio
The Anekanta® AI Risk Intelligence System™ for biometric and high-risk AI supports organisations in building consistency into their governance processes. The system leads to the creation of a new, comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks associated with using certain types of software.
Sector focus: AI in security, opportunity or threat? Evolving impacts and risks.
Earlier this year, Anekanta® Consulting's founder was a guest speaker at Security Buyer Live 2023, explaining the principles of AI and the considerations which lead to prohibitions and additional measures when using AI in the EU.
Unpacking the EU AI Act Negotiations
If AI-generated outputs are used within the EU, the system is subject to the Act, whether the software resides in the EU or not. Whether you are a developer/provider or a user/deployer, it is time to take action.
Podcast: Talking about AI on Conxtd Thinking
We are delighted to share the Conxtd Thinking podcast, which takes you on a journey into the ethos behind the Anekanta® brand, ethical AI, regulation and sector-specific applications.
Updated Ethical Framework for AI Implementation
Since Anekanta® created the ethical framework for AI implementation in 2020, there have been further developments which warrant consideration. These include references to the EU AI Act requirements.
How AI will change our lives
Our founder, Pauline Norstrom, met with leading security industry editor Mark Rowe of Professional Security Magazine at her company's facility at Sci-Tech Daresbury, UK, in spring 2023 to discuss why now is the right time to be talking about the impact of AI and the opportunities which arise.
AI may manifest in various forms – how do you identify it?
Any organisation that develops, provides or uses/deploys software which meets the AI definition criteria, and whose decisions are utilised within the EU, will be subject to the EU AI Act.
UK Government defines a new AI Regulator in waiting?
The burden of responsibility for safe AI development and use rests on the shoulders of business leaders, who must ensure that they have robust risk, audit, control, ethics, remediation and monitoring processes in place.
