The Anekanta® AI Risk Intelligence System™ for biometric and high-risk AI helps organisations build consistency into their governance processes. The system generates a new, comparable data set about each AI system, which can be analysed to draw conclusions about the impact and risks of using particular types of software.
AI-generated outputs used within the EU bring the software within scope of the Act, whether or not that software resides in the EU. Whether you are a developer/provider or a user/deployer, it is time to take action.
Since Anekanta® created its ethical framework for AI implementation in 2020, there have been further developments which warrant consideration, including the requirements of the EU AI Act.
Our founder, Pauline Norstrom, met with leading security industry editor Mark Rowe of Professional Security Magazine at her company facility at Sci-Tech Daresbury, UK, in Spring 2023 to discuss why now is the right time to be talking about the impact of AI and the opportunities it presents.
Anekanta AI leads the brand portfolio focused on AI impact, risk, compliance and assurance. We support organisations developing, providing and using (deploying) high-risk AI systems to conform to the requirements and principles of the EU AI Act, both within the EU and in regions outside it.
The burden of responsibility for safe AI development and use rests on the shoulders of business leaders, who must ensure that they have robust risk, audit, control, ethics, remedial and monitoring processes in place.
When the Anekanta concept was created, we held a future vision and road map for the brand. We are delighted to announce that we are progressing our plan to the next stage.
Anekanta's founder and CEO is set to provide evidence at the UK's APPG AI Session on AI in the Boardroom on 23rd January at 5.30pm at the House of Lords.
We continually contribute to the communities which place the ethical and legal risks associated with high-risk AI at the top of the agenda.
Our customers seek clarity, legal certainty and operating guardrails to enable them to make informed decisions about their use of AI technology. Our response to the UK Government consultation is therefore mindful of these needs.