Lessons for Business Leaders from the Dutch AI Scandal – AI, the Board and the EU AI Act

The Dutch AI Scandal Offers a Stark Lesson to Business Leaders on the Risks of Algorithmic Systems

Thousands of parents were falsely accused of fraud due to an opaque algorithmic risk model used by the Dutch tax authorities, with devastating consequences for families. This scandal not only led to the resignation of the Dutch government in 2021 but also exposed fundamental flaws in governance and oversight of AI systems, with significant implications for businesses operating under the evolving EU AI Act.

AI Governance and Human Rights: A Cautionary Tale

The root of the scandal lies in the misuse of AI to classify applicants for childcare benefits as high-risk based on biased and opaque criteria, including nationality. The algorithm used was a black box system with self-learning capabilities, making it nearly impossible for users—or even its designers—to explain or understand its decision-making process.

This resulted in a breach of multiple articles of the EU Charter of Fundamental Rights, which serves as the cornerstone for the EU AI Act.

The scandal highlights the importance of key OECD AI principles such as transparency, accountability, and human rights protection. It also demonstrates the need for the “trustworthy AI” framework advocated by the High-Level Expert Group on AI (HLEG), which emphasises robust risk management and ethical alignment.

The EU AI Act: Addressing High-Risk and Prohibited AI Systems

The Dutch childcare benefits scandal played a pivotal role in shaping the development of the EU AI Act. It highlighted the dangers of unregulated AI systems, particularly regarding transparency, accountability, and human rights protection. In response, the European Commission proposed the AI Act to establish a comprehensive regulatory framework for AI. The Act directly addresses the risks the scandal revealed, emphasising the robust governance and oversight needed to ensure such failures are not repeated.

Under the EU AI Act, the Dutch childcare benefits system would likely qualify as a high-risk application, given its societal impact and implications for fundamental rights. Key provisions of the Act address many of the failings seen in the Dutch case, such as:

  1. Transparency Obligations: The Act mandates clear documentation and explanation of AI decision-making, mitigating the risks posed by black-box systems. Article 13(1) requires high-risk AI systems to be designed and developed for “traceability” and “explainability,” while Article 52(3) requires “clear, plain and adequate” information in the instructions for use of high-risk AI systems.
  2. Human Oversight: The Dutch system lacked adequate human oversight, allowing biases embedded in the algorithm to persist unchecked. Article 14(1) requires high-risk AI systems to have “human oversight” that is “appropriate to the intended purpose” of the system, directly addressing unchecked biases in algorithmic decision-making.
  3. Data Quality Standards: The use of nationality as a proxy for risk violated non-discrimination principles. Article 10 mandates rigorous data governance, requiring that high-risk AI systems be trained on “high-quality” datasets that are “relevant, representative, free of errors and complete”, a safeguard against the discriminatory practices seen in the Dutch case (see the illustrative sketch after this list).
  4. Public Database Disclosure: Article 60 mandates the establishment of a publicly accessible EU database of high-risk AI systems, and Article 51 requires providers to register their systems in it before placing them on the market or putting them into service, enabling scrutiny by regulators, civil society, and affected individuals.
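
To make these obligations more tangible for technical teams, the sketch below illustrates, in Python, two simple internal controls of the kind these requirements point towards: a decision record that supports traceability, and a pre-training check that flags potentially discriminatory proxy features such as nationality. All names, fields, and the prohibited-feature list are hypothetical assumptions for illustration; they are not prescribed by the EU AI Act.

```python
# Illustrative sketch only; names and rules are hypothetical, not defined by the EU AI Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Example list of features that could act as discriminatory proxies (cf. nationality in the Dutch case).
PROHIBITED_PROXY_FEATURES = {"nationality", "ethnicity", "religion"}

@dataclass
class DecisionRecord:
    """Minimal audit-trail entry so each automated decision can later be traced and explained."""
    subject_id: str
    model_version: str
    inputs: dict
    score: float
    outcome: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def check_training_features(feature_names: list[str]) -> list[str]:
    """Return any training features that should trigger a human review before model training."""
    return [f for f in feature_names if f.lower() in PROHIBITED_PROXY_FEATURES]

# Example usage
flags = check_training_features(["income", "nationality", "num_children"])
if flags:
    print(f"Review required before training: potentially discriminatory features {flags}")

record = DecisionRecord("case-001", "risk-model-v2", {"income": 15000}, 0.83, "pending_human_review")
print(record)
```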

AI Literacy: Bridging the Knowledge Gap

The Dutch case also underscores the critical need for AI literacy within public institutions. Decision-makers often lack the technical understanding necessary to oversee and interrogate algorithmic systems effectively. This knowledge gap can lead to blind reliance on AI outputs, known as ‘automation bias’, as was evident in the Dutch tax authorities’ inability to question or adjust the flawed risk classification model.
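
One hypothetical pattern for countering automation bias is to route high-impact automated outputs to a human reviewer rather than acting on them automatically. The Python sketch below shows the idea; the threshold, names, and review queue are assumptions made for illustration and are not drawn from the Dutch system or mandated by the EU AI Act.

```python
# Illustrative human-in-the-loop gate; threshold and names are hypothetical assumptions.
REVIEW_THRESHOLD = 0.7  # scores at or above this value must be confirmed by a human reviewer

def decide(subject_id: str, risk_score: float, review_queue: list) -> str:
    """Defer high-impact cases to human review rather than acting on the score automatically."""
    if risk_score >= REVIEW_THRESHOLD:
        review_queue.append({"subject_id": subject_id, "risk_score": risk_score})
        return "pending_human_review"  # no enforcement action until a person has examined the case
    return "no_action"

# Example usage
queue: list = []
print(decide("case-001", 0.92, queue))  # pending_human_review
print(decide("case-002", 0.15, queue))  # no_action
```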

AI literacy must become a cornerstone of AI governance. Policymakers, public servants, and businesses need training to understand the limitations, risks, and ethical considerations of AI systems. Incorporating AI literacy into governance frameworks not only enhances oversight but also empowers individuals to engage with AI in an informed and critical manner.

AI Risk and Governance Implications for Business Leaders

Just as businesses require a return on investment, the Dutch tax authorities were required to demonstrate the efficiency of their data-driven fraud detection methods, including the new risk classification model. They demonstrated this efficiency by seizing enough funds from alleged fraudsters to cover the costs of the operation. This created a perverse incentive to maximise the amount of funds seized, even when some of the accusations of fraud were incorrect.

For businesses deploying AI systems, the lessons from the Dutch scandal are clear. Effective governance structures must be in place to identify and mitigate risks, ensure compliance with regulatory frameworks, and uphold ethical standards. Businesses must also recognise that failures in AI governance can have cascading consequences—damaging reputations, inviting regulatory scrutiny, and eroding stakeholder trust.

In particular, organisations using high-risk AI systems should adopt robust impact assessment processes that consider not only technical performance but also socio-economic and human rights impacts. The Anekanta® AI Risk Intelligence System, for example, offers a holistic approach to evaluating and managing risks in high-stakes AI deployments.

Bridging the Gap: Business and Government Accountability

[Image: cover of the Amnesty International report Xenophobic Machines on the Dutch childcare benefits scandal]

The scandal underscores a broader point: the challenges faced by governments in deploying AI are not dissimilar to those encountered in the private sector. Poorly governed AI can lead to financial, reputational, and legal repercussions, whether in public administration or corporate operations. Aligning with the EU AI Act and the OECD and HLEG principles ensures that AI systems are not only compliant but also ethically sound.

Moreover, businesses can learn from the Dutch case that compliance is not just a legal obligation—it is a strategic imperative. Transparency, fairness, and accountability are key to maintaining trust in AI systems, particularly in sensitive applications such as biometrics, credit scoring, or fraud detection.

Towards Responsible AI Governance

The Dutch childcare benefits scandal serves as a stark reminder of the potential harms of unregulated AI. By embedding lessons from this case into the implementation of the EU AI Act, businesses and governments alike can avoid repeating these mistakes. For organisations operating in high-risk domains, the integration of strong governance frameworks like ISO/IEC 42001, coupled with ethical risk management practices, is critical to navigating the complex AI landscape.

At Anekanta®, we recognise that effective governance starts with informed leadership. Our AI Governance Framework for Boards is designed to help organisations assess their governance exposure, align their strategies with regulatory requirements, and embed ethical principles into their AI operations. We also provide tailored training programmes for boards, equipping directors with the AI literacy needed to understand their responsibilities, interrogate AI risks effectively, and make informed decisions about the technologies shaping their organisations.

Fostering a culture of AI literacy, alongside transparency and accountability, ensures that all stakeholders—from decision-makers to end-users—are equipped to engage responsibly with AI. By combining robust governance structures with enhanced board-level understanding, we can ensure AI serves humanity responsibly, avoiding the pitfalls of the past while embracing the promise of innovation.

Further information

Didn’t know an algorithm is AI?

Under both the EU AI Act and the OECD framework, an algorithm is considered a core component of an AI system. The EU AI Act defines AI as software that, for a given set of human-defined objectives, generates outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments. Similarly, the OECD recognises AI systems as machine-based systems that make predictions, recommendations, or decisions. Algorithms, whether used in isolation or as part of a broader AI system, fit these definitions when they are designed to process data and generate outcomes that impact human decision-making or behaviour. Thus, even seemingly simple algorithms, when used for automated decision-making, fall within the scope of AI regulation under these frameworks.
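
To make the point concrete, even a short, hand-written scoring rule like the hypothetical one below produces decisions that influence people’s circumstances when it is used to select cases for investigation, and so can fall within the broad definitions quoted above; no machine learning is required. The rule and fields are invented for illustration and are not taken from the Dutch system.

```python
# Hypothetical hand-coded scoring rule, invented for illustration; not the Dutch model.
def fraud_risk_flag(applicant: dict) -> bool:
    """Return True if the application should be routed to an enforcement team."""
    score = 0
    if applicant.get("income", 0) < 20_000:
        score += 1
    if applicant.get("missing_paperwork", False):
        score += 2
    return score >= 2  # the output is a decision that directly affects the applicant

# Example usage
print(fraud_risk_flag({"income": 15_000, "missing_paperwork": True}))  # True
```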

Sources and notes
Amnesty International, Xenophobic Machines, 25 October 2021, Index Number: EUR 35/4686/2021.
Applicable standards (non-exhaustive): ISO/IEC 42001, ISO/IEC 22989, ISO/IEC 5259

Anekanta® has used the Dutch Government case study as a foundational piece to demonstrate AI risk since the issue emerged. We are bringing it into the spotlight again as we are seeing a dilution of AI risks into checkbox compliance matters. There is also ongoing confusion over what AI is, and a focus on Gen AI risks only, which can lead to ‘classical’ AI risks being left out of the governance dialogue. We are highlighting these issues for business leaders to help them maximise the use of AI in their businesses while minimising harms.


Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn