AI in Employment: Navigating Regulations and Anti-Bias Standards

Responsible AI in Employment

An analysis of the UK Information Commissioner’s Office (ICO) ruling on employee time and attendance monitoring

Introduction

At Anekanta®, we believe in the power of Artificial Intelligence (AI) to revolutionise businesses. However, responsible implementation is key. Here, we analyse a recent ruling by the Information Commissioner’s Office (ICO) in the UK, which highlights the importance of responsible AI practices, particularly when dealing with biometric data, in light of the EU AI Act (Regulation (EU) 2024/1689) and emerging standards such as BS 9347:2024. We also explore the growing focus on anti-bias in AI, particularly within the US employment landscape.

The Case: Employee time and attendance monitoring

The ICO recently issued enforcement notices against Serco Leisure and several associated trusts for their use of facial recognition technology (FRT) and fingerprint scanning for employee attendance monitoring and payment. The investigation revealed:

  • Unlawful Processing: Over 2,000 employees at 38 leisure facilities had their biometric data collected without a demonstrated necessity or proportionality, in breach of the UK GDPR and the Data Protection Act 2018.
  • Lack of Alternatives: Less intrusive methods like ID cards or fobs were not offered as alternatives.
  • Power Imbalance: Employees felt pressured to submit their biometric data due to the employer-employee power dynamic.

The Outcome: A call for Responsible AI practices

The ICO ordered Serco to stop processing employee biometric data and to destroy non-essential information. This case underscores the critical need for organisations, depending on their jurisdiction, to consider a number of factors both before and after deployment. Examples include:

  • Risk Assessments: Conduct thorough risk assessments that identify when, where and for what purpose the technology will be used, establish the applicable regulations and local ordinances, and mitigate potential biases and inaccuracies, particularly regarding race, gender, ethnicity, accessibility and facial difference.
  • Prioritise Data Privacy: Balance business needs with employee privacy rights, adhering to the UK GDPR, the EU AI Act, and evolving data privacy and AI cybersecurity standards.
  • Transparency and Consent: Be transparent about employees’ exposure to an AI system; inform them what data is collected, how and where it is processed securely, and who has access to it; and ensure clear, freely given consent is obtained in accordance with the law, with an opt-out available without sanction.
  • Least Intrusive Means: Utilise the least intrusive methods for achieving the desired outcome.

The rise of anti-bias in AI

The Serco case highlights the growing global focus on responsible AI, with specific emphasis on preventing bias in AI used for employment purposes. Here are some key developments:

  • USA: New York City recently enacted Local Law 144, the nation’s first law requiring employers to audit AI tools used in hiring for potential bias. This law emphasises the need for unbiased algorithms to ensure fair hiring practices.
  • Australia: The Australian Human Rights Commission released a discussion paper exploring the risks of bias in algorithmic decision-making. This paper highlights the importance of transparency, accountability, and fairness in AI development and deployment.

A missed opportunity: guidance and standards

While the ICO ruling occurred before the official release of BS 9347, this new standard offers a framework for responsible FRT implementation. Following BS 9347 could have helped Serco mitigate risks and comply with emerging best practices.

At Anekanta®, we offer comprehensive AI risk assessments and responsible AI implementation solutions. We can help you:

  • Identify and Mitigate Risks: Analyse potential risks associated with AI deployment in compliance with the UK GDPR, the EU AI Act, emerging standards like BS 9347, and evolving anti-bias regulations for AI.
  • Comply with Regulations: Ensure compliance with data privacy regulations and emerging legal requirements for fair AI in employment, like those in NYC.
  • Develop Responsible AI Strategies: Implement AI solutions aligned with ethical principles and that prioritise fairness.
  • Leverage Industry Guidance: Incorporate best practices from resources like BSIA’s guide and BS 9347, while staying updated on anti-bias in AI regulations like those in NYC and Australia.

Embrace the future of Responsible AI

AI can unlock immense potential, but responsible development and use are crucial. Let us guide you towards successful and ethical AI implementation that adheres to current and upcoming regulations, leverages emerging best practices, and prioritises fairness in your AI use cases. Contact us today to learn how we can help.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.

This article was co-authored with Gemini as an assistant, using prompts designed by skilled and experienced people with expertise in the subject matter. The final edit was carried out with human oversight. This article is designed to provide a high-level overview of the topic and is not written as an academic research piece. Nothing in this article constitutes legal advice, nor should it be construed as such. All content and prompts are copyright Anekanta Ltd.