Deploy AI facial recognition software safely

Anekanta® AI has developed a number of independent, vendor-agnostic AI governance frameworks, expertly designed to assess the safety of high-risk AI and its impact on stakeholders. Our Facial Recognition Privacy Impact Risk Assessment System™ represents a small subset of our wider capability. Find out more.

Anekanta® AI featured in UK Government AI Assurance Portfolio

The Anekanta® AI Risk Intelligence System™ for biometric and high-risk AI helps organisations build consistency into their governance processes. The system creates a new, comparable data set about the AI, which can be analysed to draw conclusions about the impact and risks of using certain types of software.

Unpacking the EU AI Act Negotiations

AI-generated outputs used within the EU, whether the software resides in the EU or not, render it subject to the Act. Whether you are a developer/provider or a user/deployer, it is time to take action.

Anekanta® AI: impact, risk, governance and assurance.

Anekanta AI leads the brand portfolio focused on AI impact, risk, compliance and assurance. We support organisations developing, providing and using (deploying) high-risk AI to conform to the requirements and principles of the EU AI Act, both within the EU and in regions outside it.

Anekanta AI: impact, risk, governance and assurance.

When the Anekanta concept was created, we held a future vision and roadmap for the brand. We are delighted to announce that we are progressing our plan to the next stage.

Anekanta founder to provide evidence to the APPG AI

Anekanta's founder and CEO is set to provide evidence at the UK's APPG AI session on AI in the Boardroom, 23rd January at 5.30pm, House of Lords.

LLM and Gen AI update

ChatGPT from OpenAI can provide quite credible, human-sounding answers to complex problems. We have tried it out with a range of specialised sector research questions. Find out what happened.

Our communities – ethical agenda for high-risk AI

We continually contribute to the development of communities that place the ethical and legal risks associated with high-risk AI at the top of the agenda.

Is an ethical commercial decision a dichotomy?

Company Law, if correctly followed, can lead to good commercial practice; statutes are objective, while ethics are subjective. What if subjectivity rules AI?

Driving the safe and ethical use of AI through standards

If you are not certain whether you are implementing AI based technologies safely and ethically, then why are you doing it?