Anekanta® AI: Comply with the EU AI Act

One of the key requirements of the EU AI Act is that organisations must assess the risks posed by their AI systems before deploying them. Anekanta® AI can help developer and user organisations comply with the Act by providing systematic and comprehensive risk assessments.

Anekanta®AI featured in UK Government AI Assurance Portfolio

The Anekanta® AI Risk Intelligence System™ for biometric and high-risk AI helps organisations build consistency into their governance processes. The system creates a new, comparable data set about the AI, which can be analysed to draw conclusions about the impact and risks of using certain types of software.

Unpacking the EU AI Act Negotiations

AI-generated outputs used within the EU render the software subject to the Act, whether or not the software itself resides in the EU. Whether you are a developer/provider or a user/deployer, it is time to take action.

Podcast: Talking about AI on Conxtd Thinking

We are delighted to share the Conxtd Thinking podcast, which takes you on a journey into the ethos behind the Anekanta® brand, ethical AI, regulation and sector-specific applications.

Updated Ethical Framework for AI Implementation

Since Anekanta® created the ethical framework for AI implementation in 2020, there have been further developments which warrant consideration, including the requirements of the EU AI Act.

AI may manifest in various forms – how do you identify it?

Any organisation that develops, provides or uses/deploys software which meets the AI definition criteria, and whose decisions are used within the EU, will be subject to the EU AI Act.

Anekanta® AI: impact, risk, governance and assurance.

Anekanta® AI leads the brand portfolio focused on AI impact, risk, compliance and assurance. We support organisations developing, providing and using (deploying) high-risk AI systems to conform to the requirements and principles of the EU AI Act, both within and outside the EU.

UK Government defines a new AI regulator in waiting?

The burden of responsibility for safe AI development and use rests on the shoulders of business leaders, who must ensure that they have robust risk, audit, control, ethics, remediation and monitoring processes in place.

Evidence provided to the APPG AI at the House of Lords

Listen to the All-Party Parliamentary Group on AI session held on 23rd January 2023 at the House of Lords, and gain insights into the evidence on the state of AI board governance in the UK.

Anekanta® AI: impact, risk, governance and assurance.

When the Anekanta® concept was created, we held a future vision and road map for the brand. We are delighted to announce that we are progressing our plan to the next stage.