Critical Thinking and the EU AI Act: Avoiding the Inference Trap in Prohibited AI

The EU AI Act requires organisations to evaluate their AI systems in depth, particularly in sensitive areas such as biometrics. The regulation is being introduced in stages over the next two to three years; however, the prohibitions and AI literacy requirements already apply, and penalties take effect during 2025. Decision-makers must therefore ask critical questions about their AI systems now. This opinion piece by Anekanta®AI provides insights into the actions needed to stay on the right side of the regulation.

Understanding Prohibited AI Practices

The EU AI Act explicitly bans certain AI applications, including:

  • Subliminal manipulation that could harm individuals.
  • Biometric categorisation based on sensitive characteristics (e.g., ethnicity, political beliefs).
  • Social scoring systems that may lead to unfair discrimination.
  • Predictive policing based solely on profiling without clear legal justification.

For organisations leveraging AI in high-risk sectors, particularly those using biometrics, understanding whether their system falls within these prohibitions is not always straightforward. The nuances of AI deployment—its intent, functionality, and real-world impact—determine compliance.

Key Questions for Decision Makers

To ensure compliance and avoid regulatory penalties, executives and AI governance leaders should consider at least the following questions (a schematic way of recording the answers is sketched after this list):

  1. Does our data constitute biometric data?
    • Biometric data covers a range of measurements of human characteristics, and these can be direct or inferred.
    • How will you know if your data has become biometric data?
  2. How is biometric data being used?
    • Does your AI system categorise individuals beyond strictly necessary operational needs?
    • What risks of discrimination are there in biometric-based decision-making?
  3. Does the AI system’s function align with regulatory expectations?
    • Are any elements of the system performing functions that could be considered predictive policing or automated social scoring?
    • Has the AI system been designed in a way that indirectly influences behaviour, or is this an unintended consequence? Do you know?
  4. What safeguards exist to prevent prohibited applications?
    • Are there clear limitations on how the AI system is used in real-world contexts?
    • Is there transparency in its operation that allows for scrutiny and accountability?
    • How are operators’ AI literacy skills evaluated?
  5. What depth of evaluation has been undertaken?
    • Has the AI system been independently assessed for risk beyond basic compliance checklists and vendor claims?
    • Has sector-specific expertise been applied to understand the risks unique to the application area?
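While these questions demand expert judgment rather than automation, even a simple structured record of the answers makes the evaluation explicit, reviewable, and auditable. The sketch below is a minimal, hypothetical illustration in Python; the field names are our own illustrative assumptions, not terms defined by the EU AI Act.

```python
# Hypothetical sketch: recording answers to the five screening questions
# as structured data. Field names are illustrative, not defined by the Act.
from dataclasses import dataclass, field

@dataclass
class ProhibitionScreening:
    system_name: str
    processes_biometric_data: bool          # Q1: direct or inferred biometric data?
    categorises_sensitive_attributes: bool  # Q2: categorisation beyond operational need?
    resembles_social_scoring: bool          # Q3: predictive policing / social scoring traits?
    safeguards_documented: bool             # Q4: usage limits, transparency, operator literacy
    independently_assessed: bool            # Q5: evaluation beyond vendor claims
    notes: list[str] = field(default_factory=list)

    def flags(self) -> list[str]:
        """Return the areas that warrant deeper, expert evaluation."""
        issues = []
        if self.processes_biometric_data and self.categorises_sensitive_attributes:
            issues.append("possible prohibited biometric categorisation")
        if self.resembles_social_scoring:
            issues.append("possible social scoring / predictive policing")
        if not self.safeguards_documented:
            issues.append("safeguards not documented")
        if not self.independently_assessed:
            issues.append("no independent assessment")
        return issues

screening = ProhibitionScreening(
    system_name="entrance FRT pilot",
    processes_biometric_data=True,
    categorises_sensitive_attributes=True,
    resembles_social_scoring=False,
    safeguards_documented=True,
    independently_assessed=False,
)
print(screening.flags())
```

A record like this is no substitute for expert evaluation; its value is that it forces every question to be answered explicitly and leaves an auditable trail.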

The Complexity of Inference in AI Decision-Making

One of the most overlooked aspects of AI risk assessment is the complexity of inference—how AI systems derive conclusions from data. Many governance frameworks focus solely on input data and output results but fail to account for the subtle, often opaque ways in which AI models infer characteristics about individuals or groups.

For example, a facial recognition system may categorise individuals by attributes such as race, gender, or ethnicity, and those classifications can lead to broader inferences when combined with other AI-driven analytics. The AI Act’s prohibitions on biometric categorisation extend beyond direct classifications to include indirect inferences that could result in unfair treatment or social scoring, particularly when biometric data is used in ways that impact decision-making.
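As a toy illustration of indirect inference, consider a model that never receives a sensitive attribute as input yet still recovers it from correlated proxy features. The sketch below uses entirely synthetic data and hypothetical feature names; it does not represent any real system, but it shows the behaviour that a governance review limited to inputs and outputs would miss.

```python
# Toy illustration of indirect inference on synthetic data.
# No real system or dataset is represented; the proxies are hypothetical.
import random

random.seed(0)

def make_person():
    """A hidden sensitive attribute correlates with two innocuous-looking
    proxy features (imagine postcode cluster and device type)."""
    sensitive = random.random() < 0.5
    proxy_a = int((sensitive and random.random() < 0.8) or
                  (not sensitive and random.random() < 0.2))
    proxy_b = int((sensitive and random.random() < 0.7) or
                  (not sensitive and random.random() < 0.3))
    return proxy_a, proxy_b, sensitive

people = [make_person() for _ in range(10_000)]

def predict_sensitive(proxy_a, proxy_b):
    """A trivial 'model' that only ever sees the proxies."""
    return proxy_a + proxy_b >= 1

# ...yet it recovers the hidden attribute far better than the 50% chance rate.
correct = sum(predict_sensitive(a, b) == s for a, b, s in people)
print(f"accuracy inferring the hidden attribute from proxies: {correct / len(people):.0%}")
```

The point is not the toy model itself but the pattern: sensitive characteristics can be reconstructed from data that looks harmless, which is why the Act’s prohibitions reach indirect inference.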

This issue is compounded by the fact that many organisations deploying AI lack the depth of experience necessary to recognise these risks. Governance tools and frameworks must be operated by skilled professionals who understand how AI systems function—not just how to follow a process.

Crucially, the people using the AI need both the AI literacy skills and the autonomy to make decisions that critically challenge the AI.

The Human Factor: Beyond Automation Bias

While AI governance tools can provide valuable insights, trusting automation without human oversight is a major risk. Decision-makers must ensure that those responsible for compliance are not simply trained to push buttons but have the expertise to interpret results and challenge them when necessary.

This requires AI literacy that extends beyond operational training—it demands:

  • A deep understanding of how AI systems arrive at their conclusions.
  • The ability to critically assess whether an AI-generated decision aligns with ethical, legal, and strategic goals.
  • The confidence and autonomy to question automated outputs and escalate concerns when something seems off.

Lessons from the Post Office Horizon and Dutch Child Welfare Scandals

The dangers of blind trust in technology are not new. The Post Office Horizon scandal serves as a cautionary tale in which a lack of understanding and challenge led to catastrophic consequences. The Horizon system produced errors that were misinterpreted as evidence of wrongdoing. Those administering the system blindly trusted the computer, a behaviour fuelled by a widely reported institutional culture of cover-up. The leadership compounded the issue by being disconnected and ill-informed, and failed to act. The result was widespread miscarriages of justice, life-changing harm, and long-term reputational damage.

The Dutch child welfare scandal arose from a similar set of issues, this time automated by AI: racial profiling in the child welfare system led to unfair punitive measures that went unchallenged. This harmed thousands of people and caused terminal damage to the Government’s credibility, resulting in its resignation.

These cases highlight why critical thinking and domain expertise are essential. When decision-makers cannot interpret or challenge the outputs of an AI system, they risk perpetuating errors on a large scale—with severe legal, financial, and ethical implications.

Why There is No One-Size-Fits-All Approach

Some organisations may assume that general AI governance frameworks or compliance tools can assess risk uniformly across all applications. However, risk evaluation for AI—particularly biometrics—requires deeper sector-specific knowledge.

For example, the UK’s BS 9347 standard for the ethical use of facial recognition technology (FRT) in video surveillance originated from the British Security Industry Association’s (BSIA) ethical and legal guide to FRT, and it outlines best practices specific to the complexities of biometric AI. Similarly, ISO/IEC 42001 provides governance principles but requires customisation, set out in its annexes, to align with real-world implementations.

Compliance is not just about ticking regulatory boxes—it’s about ensuring ethical, legal, and responsible AI use tailored to each business model.

The Cost of Misjudging Compliance

With enforcement authorities poised to impose fines of up to €35 million or 7% of global annual turnover, whichever is higher, decision-makers cannot afford to rely on surface-level risk assessments. Early-stage evaluations that take an unbiased view of AI applications, considering legal, ethical, and operational factors, are critical in mitigating risks before enforcement begins.
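A minimal sketch of that “whichever is higher” arithmetic follows; the turnover figure is purely illustrative.

```python
# Maximum fine exposure for prohibited-practice infringements under the
# EU AI Act: the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_exposure(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Illustrative turnover figure only.
print(f"EUR {max_fine_exposure(2_000_000_000):,.0f}")  # prints: EUR 140,000,000
```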

Anekanta®’s Expertise in AI Risk and Biometrics

At Anekanta®, we have deep experience in providing informed recommendations and offering strategic insights tailored to the realities of AI deployment. Our extensive experience in high-risk AI, biometrics, and facial recognition good practice equips us to support organisations in:

  • Evaluating AI systems to determine if prohibitions apply.
  • Assessing biometric AI risks with sector-specific expertise.
  • Understanding how inference impacts regulatory compliance and ethical considerations.
  • Ensuring that AI literacy is embedded in governance structures to empower critical evaluation.
  • Providing clear, actionable strategies for compliance.

As enforcement of the EU AI Act progresses, proactive decision-makers must ensure they have the right expertise to navigate this evolving landscape.

The question is not just “Is our AI compliant?”, but “Have we undertaken the necessary depth of evaluation to prove compliance?”

If you are making AI choices or assessing the risks of your AI applications, now is the time to take a strategic approach to critical thinking and the EU AI Act to avoid the inference trap in prohibited AI.

Let’s discuss how Anekanta® can help you navigate complexity and move forward with confidence.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.

Did you know Anekanta® has made its CPD-certified ‘AI in the Boardroom: AI literacy and governance for boards and senior leaders’ course available? It can now be booked as a standalone engagement, having previously been available exclusively as part of project delivery. Order a course brochure here.
