Last month, the EAIGG (Ethical AI Governance Group) convened an informative and insightful debate about the differing approaches to AI regulation from EU, UK and US perspectives. Find out more from global leaders in Responsible AI.
At Anekanta® we focus on maximising the opportunities and balancing the risks arising from the development and use of high-risk AI. High-risk AI is defined as AI which poses a risk to the health, safety and fundamental rights of individuals. A range of use cases is defined in Annex III of the EU AI Act (Regulation (EU) 2024/1689), spanning sectors and domains including biometrics; critical infrastructure; education and vocational training; employment, workers' management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; and the administration of justice and democratic processes.
Additionally, we support a range of sectors concerned about the high-risk governance requirements primarily set out in Chapter III of the EU AI Act. Beyond the EU AI Act's requirements, the implementation of the foundational principles for trustworthy AI, which originated with the OECD, blended with good board governance practice, is of interest to all sectors, and specifically to those which process personal data and whose activities may have a biased effect on groups underrepresented in data. Furthermore, by modelling commercial existential risk, we identify the potential show-stoppers which may result in a great AI application opportunity being permanently confined to the status of 'pilot'. To find out how we can help you balance opportunity and risk for the maximum return, get in touch today.
Showcasing Anekanta®’s Leadership in High-risk AI and Biometrics
Anekanta® AI has developed a number of independent, vendor-agnostic ethical AI governance frameworks, expertly designed to assess the safety of high-risk AI development and deployment and its impact on stakeholders. Here is the background to our work in this area.
Anekanta® recognised for ethical and responsible AI development and implementation
We are proud to announce the release of a ground-breaking new British Standard, "Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice (BS 9347:2024)". The Anekanta® Privacy Impact Risk Assessment System™ de-risks FRT use cases anywhere in the world and recommends the new standard for UK mitigation purposes.
Examining the ‘cost-benefit’ of adopting AI within your business
By adopting Responsible AI practice, businesses can unlock the transformative power of AI technology while minimising risks and maintaining stakeholder trust. Anekanta® recommends that organisations start their AI policy and AI management system implementation (BS ISO/IEC 42001) alongside legal (EU AI Act) and regulatory considerations, depending on the regions where the business operates or where its AI decisions are used.
What does the new legal definition of AI systems mean in practice for commercial use cases?
Organisations may not be certain whether their systems are AI, given the range of contrasting views from vendors and buyers. It is important that they recognise and categorise these systems as soon as possible. Developers, providers and users/deployers of AI systems which meet the EU AI Act's definition criteria, and whose decisions are utilised within the EU, have to comply with the law.
EU AI Act: Cleared to move to the next stage. Your action list to get started.
Organisations are urged to review their development road maps and current use cases to determine whether their AI systems sit in the prohibited and high-risk categories.
Announcement: New pre-compliance services for the AI Management System standard BS ISO/IEC 42001
Navigating the intricacies of BS ISO/IEC 42001 can be a daunting task, especially for organisations embarking on their responsible AI journey. This is where Anekanta®, with our expertise and experience in AI ethics and governance, can play a pivotal role in guiding organisations towards certification success.
'Rite Aid' FTC Ruling: AI Facial Recognition and Biometric Software
Here we provide an overview of the Rite Aid FTC judgment from a Responsible AI perspective, and in doing so consider why the judge ruled this way. The algorithmic bias risks, which may previously have been considered hypothetical, crystallised.
EU AI Act Q & A for developers or users of high-risk AI
The European Commission has updated its Q&A following the provisional agreement of the text in December 2023. We have extracted the key points which relate to high-risk and biometric AI.
The EU AI Act has been agreed – headlines you need to know about now
The trilogue negotiators reached a provisional agreement on the EU Artificial Intelligence Act on 9th December 2023. The EU AI Act will become law in all EU Member States in early 2024. To find out how to comply with the Act, get in touch.
