EU AI Act: Risk Assessments

Anekanta® AI Risk Intelligence System™

Designed for EU AI Act, UK and US high-risk AI regulation compliance

OECD.AI Policy Observatory
Featured in the Catalogue of Tools & Metrics for Trustworthy AI

Anekanta® AI’s AI Risk Intelligence System™ is a specialised discovery and analytical framework that challenges developers’ and users’ understanding of AI system impact through detailed forensic questioning, whether they have developed, procured, or plan to integrate AI into their operations.

Our system, developed exclusively in-house by Anekanta® AI, helps organisations mitigate AI impact and risk and, in doing so, fulfil the OECD principles for trustworthy AI, the UK Government’s AI principles, the trustworthy-AI principles at the heart of the EU AI Act, and emerging legislation in the USA.

Through our process, we discover the AI system’s level of autonomy, the origin of its inputs, its expected (and sometimes unexpected) outputs, and its effects and impacts.

Uncovering Hidden AI Risks That Others Miss

AI systems, whether learning models that adapt over time or fixed models designed for specific tasks, create risks that are often misunderstood, downplayed, or overlooked.

The problem is not just in how AI systems are built, but in how they operate in the real world. AI systems, even those that do not learn or adapt after deployment, can:

  • Infer patterns from training data that embed bias or discrimination (e.g., biometrics and facial recognition)
  • Draw unintended correlations that influence outcomes (e.g., credit scoring, risk profiling)
  • Interact with other systems in ways that create complex, unpredictable effects (e.g., AI-driven decision chains)
  • Affect individuals and communities in ways that are difficult to anticipate or control

These risks arise from the core characteristics of AI systems identified in the European Commission’s research*: complexity, opacity, openness, autonomy, predictability, data-drivenness, and vulnerability.

Whether an AI system is learning or fixed, the potential for harm, liability, and regulatory breach remains. Yet too often, these risks are dismissed by technical teams as theoretical or improbable, hidden behind a layer of data-science jargon that fails to acknowledge the real-world consequences of AI deployment.

This is the AI risk blind spot—and it is exactly what the Anekanta® AI Risk Intelligence System™ is designed to uncover.

The Anekanta® Solution: Developed for the EU AI Act

The Anekanta® AI Risk Intelligence System™ is a specialist discovery and analysis framework created in late 2022, before the EU AI Act was finalised, to help organisations navigate complex AI risks, and later updated to meet the Act’s full requirements.

It was developed specifically for risk categorisation, impact assessment, and preparation for conformity assessments under Article 43 of the EU AI Act, and for helping organisations move AI systems from prohibited to permitted categories through evidence-based risk management.

The system applies a structured, 150+ point critical questioning framework to examine:

  • How AI systems infer, evolve, and make decisions—whether through learning, pattern recognition, or static rules
  • How data sources combine and shift risk profiles
  • The outputs, actions, and downstream impacts of AI systems in real-world use
  • The adequacy of governance, accountability, and oversight
  • Alignment with legal and regulatory standards including the EU AI Act, UK AI White Paper, OECD AI Principles, and GDPR
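As a purely illustrative sketch (not Anekanta®’s proprietary framework, which is human-led and far more extensive), the EU AI Act’s tiered risk categorisation that such questioning ultimately feeds into can be modelled as a first-pass screen. The tier names follow the Act; the function, field names, and example inputs are assumptions for illustration only:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices, e.g. social scoring
    HIGH = "high-risk"          # Annex III use cases, e.g. remote biometrics
    LIMITED = "limited-risk"    # transparency obligations, e.g. chatbots
    MINIMAL = "minimal-risk"    # everything else

def screen(system: dict) -> RiskTier:
    """Hypothetical first-pass screen; a real categorisation requires
    expert, contextual assessment, not a lookup of a few flags."""
    if system.get("social_scoring") or system.get("subliminal_manipulation"):
        return RiskTier.PROHIBITED
    if system.get("annex_iii_use_case"):  # e.g. biometrics, credit scoring
        return RiskTier.HIGH
    if system.get("interacts_with_humans"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(screen({"annex_iii_use_case": True}).value)  # high-risk
```

A screen like this only flags which compliance track applies; the forensic questioning above is what produces the evidence needed to defend that classification.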

This is not a superficial checklist or an automated tool. It is a human-led, forensic process that combines desktop reviews of technical documentation with hands-on evaluations of AI systems in context—delivering a comprehensive risk and impact assessment with clear, actionable recommendations.

Why Anekanta®?

The Anekanta® AI Risk Intelligence System™ is distinct because it is:

  • Human-Led and Forensic:
    Developed and delivered by experts, the system applies critical thinking and structured analysis to uncover hidden AI risks that technical scans and checklists miss.
  • Designed for High-Risk AI Systems:
    Created specifically for organisations operating in sectors where AI has real-world impact—biometrics, security, critical infrastructure, healthcare, finance, and public services.
  • Focused on Inference Risk and Emergent Behaviours:
    Addresses the blind spots in AI risk management by exposing how systems—whether learning or fixed—can infer, correlate, and influence outcomes in unintended ways.
  • Aligned with the EU AI Act and Global Standards:
    Built for risk categorisation, impact assessment, and preparation for conformity assessments under Article 43, while supporting compliance with the UK AI White Paper, OECD AI Principles, and AI standards such as ISO/IEC 23894, ISO/IEC 42001, and ISO/IEC 42005.
  • A Bridge Between Technical Teams and the Boardroom:
    Provides boards and leadership teams with the insight they need to make informed, defensible decisions about AI deployment—transforming AI risk management from a technical issue to a strategic capability.
  • Commercially Focused:
    Links AI risk to business outcomes—reducing uncertainty, protecting brand reputation, enabling market access, and building stakeholder trust.

How the AI Risk Intelligence System™ Compares

| Aspect | Anekanta® AI Risk Intelligence System™ | NIST AI RMF | ISO/IEC 23894 |
| --- | --- | --- | --- |
| Depth of Analysis | Forensic, 150+ point critical questioning framework | High-level principles and considerations | High-level risk management principles |
| Focus on Inference Risk | Explicitly identifies and addresses emergent risks | Acknowledged in theory but not operationalised | Recognised but lacks detailed methodology |
| Alignment with EU AI Act | Designed for EU AI Act risk categorisation and conformity assessments | General principles, not specific to the EU AI Act | Not tailored to specific regulations |
| Human Oversight | Human-led, contextual analysis | Encouraged but not prescriptive | Encouraged, but implementation left to organisations |
| Commercial Relevance | Links AI risk to business impact, liability, and market access | Focus on governance principles | Focus on risk management integration |
| Operational Readiness | Practical, actionable outputs | Conceptual framework | Conceptual framework |
| Legal Defensibility | Generates evidence for EU AI Act compliance | Encouraged in principle | Lacks practical evidence framework |

The Anekanta® Difference

We go beyond surface-level assessments to deliver:

  • A forensic, structured evaluation of high-risk and biometric AI systems in context
  • A practical risk mitigation strategy with clear actions
  • Documentation and evidence that stands up to regulatory scrutiny
  • Confidence that your AI systems are safe, compliant, and commercially viable

This is not a theoretical model, a compliance checklist, or an automated tool. It is a human-led risk intelligence system, designed for organisations that need real assurance in an evolving AI landscape.

Conformity Assessments

Our AI Risk Intelligence System™ is a vital precursor to conformity assessments under Article 43 of the EU AI Act. It helps providers and deployers categorise their AI systems across the entire ex-ante and ex-post lifecycle. To learn more, visit our information page on conformity assessments.

*Contact Anekanta® for research source information

For further information about categorising your AI systems and preparing for conformity assessments, please get in touch.

The EU AI Act requires conformity assessments for all high-risk systems. Our AI Risk Intelligence System™ classifies your systems and enables you to act with confidence.


Anekanta® AI and Anekanta® Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn