AI harms are not a hypothetical future state; they are happening now. Here, we examine the Federal Trade Commission (FTC) ruling against Rite Aid on the use of AI facial recognition software.
On December 19, 2023, the FTC issued an order for a permanent injunction against US pharmacy chain Rite Aid, prohibiting its use of facial recognition technology in its 4,900 stores for five years.
Here we provide an overview of the judgment from a Responsible AI perspective and, in doing so, offer a view on why the court ruled this way.
The algorithmic bias risks, which may previously have been considered hypothetical, have now crystallized, underlining the need for regulation, pre-implementation compliance, risk management, vendor assessment and management, and ongoing AI governance, training and accountability throughout the lifecycle of the system.
Main issues with Rite Aid's use of facial recognition software
We have extracted the salient points relating to the selection, procurement and management of facial recognition software.
“Among other things, Rite Aid has:
a. Failed to assess, consider, or take reasonable steps to mitigate risks to consumers associated with its implementation of facial recognition technology, including risks associated with misidentification of consumers at higher rates depending on their race or gender;
b. Failed to take reasonable steps to test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying the technology;
c. Failed to take reasonable steps to prevent the use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts;
d. Failed to take reasonable steps to train or oversee employees tasked with operating facial recognition technology and interpreting and acting on match alerts; and
e. Failed to take reasonable steps, after deploying the technology, to regularly monitor or test the accuracy of the technology, including by failing to implement any procedure for tracking the rate of false positive facial recognition matches or actions taken on the basis of false positive facial recognition matches.”
“As a result of these false-positive match alerts, Rite Aid subjected consumers to surveillance, removal from stores, and emotional and reputational harm, as well as other harms.”
Section 33
Statement from the Commissioner
In his statement, FTC Commissioner Alvaro M. Bedoya made a number of key observations, highlighting the corrective actions which must be taken if Rite Aid chooses to use facial recognition software after the ban expires.
“II. Today’s settlement offers a strong baseline for what an algorithmic fairness program should look like.
The attached Order has many strengths. First and foremost, it flat-out bans Rite Aid from using face surveillance for five years, and requires deletion of biometric data collected in connection with the operation of the system.
Going forward, if Rite Aid decides to take up such a system in the future, the Order:
✔ forces Rite Aid to tell people when they are enrolled in the system, and tell them how to contest their entry into that system;
✔ forces Rite Aid to tell consumers when it takes some action against them based on that enrollment that could hurt them, and tells them how to contest those actions;
✔ forces Rite Aid to answer those complaints quickly;
✔ requires robust testing, including testing for statistically significant bias on the basis of race, ethnicity, gender, sex, age, or disability – acting alone or in combination;
✔ requires detailed assessment of how inaccuracies may arise from training data, hardware issues, software issues, probe photos, and differences between training and deployment environments;
✔ requires ongoing annual testing “under conditions that materially replicate” conditions in which the system is deployed;
✔ requires that Rite Aid shut down the system if it cannot address the risks identified through this assessment and testing; and
✔ creates an opportunity for the Commission to conduct ongoing compliance monitoring.”
The Commissioner summed up with this closing statement:
“Algorithmic unfairness is pernicious. It hurts people invisibly and at scale. And it tends to harm people because of who they are, reifying patterns of discrimination deeply embedded in our nation’s history. Algorithmic unfairness hurts people who are already hurting. I am proud that this Commission is moving to confront it.”
FTC Commissioner Alvaro M. Bedoya
Moving forward, how can retail organizations avoid repeating Rite Aid's harms?
What is facial recognition software?
First and foremost, it is important to understand what this technology actually does. Facial recognition technology (FRT) is a biometrics-based AI system which can be used for identification in a one-to-many scenario. The AI system captures facial images from video surveillance cameras, processes and categorizes digitised biometric data of the face, and autonomously compares that data against a database of images, which should have been collected lawfully and legitimately. Using a pre-trained model, it infers, with varying degrees of certainty, whether the face is similar to one of interest. The technology can be beneficial in responsible hands, with the correct guardrails in place to ensure the entire value chain works together to achieve the result and to eliminate harms. The Rite Aid use case is a one-to-many scenario and should not be confused with fully consented one-to-one verification for access control, facilities management, banking authentication and so on.
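To illustrate the inference step described above, here is a minimal sketch of a one-to-many comparison. It assumes, hypothetically, that each face has already been converted into a numeric embedding vector; the function names, the threshold and the tiny three-dimensional vectors are illustrative only, and real systems use much larger embeddings and calibrated thresholds:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (closer to 1.0 = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_many_match(probe, watchlist, threshold=0.90):
    """Compare one probe embedding against every enrolled entry.

    Returns (entry_id, score) for the best match at or above the threshold,
    or None. The system only ever *infers* similarity with some degree of
    certainty; it never confirms identity, which is why match alerts
    require trained human review.
    """
    best = None
    for entry_id, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score >= threshold and (best is None or score > best[1]):
            best = (entry_id, score)
    return best

# Hypothetical embeddings for illustration only.
watchlist = {"person_A": [0.9, 0.1, 0.4], "person_B": [0.1, 0.8, 0.6]}
probe = [0.88, 0.12, 0.42]
print(one_to_many_match(probe, watchlist))  # best candidate above threshold
```

The threshold is the critical governance lever: set it too low and false-positive alerts rise, which is precisely the failure mode at the centre of the Rite Aid case.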
What are the drivers for the use of FRT?
Many organisations are suffering severe losses and operational inefficiencies which can make the continuation of business non-viable if the losses continue. This is why organizations are taking action and using technology to reduce loss and harm to employees; however, they do not always get that right, which may result in stifling the adoption of technology. FRT may impact human and civil rights in certain scenarios; however, it can also be highly beneficial if used and managed responsibly. In the Rite Aid settlement, the company agreed to cease using FRT for a period of five years, a move which may spur action toward the responsible use of FRT across the retail sector.
The Rite Aid case reflects all the potential issues which we imagined when we started on our journey to set out responsible AI frameworks and operationalized practices for business users, to ensure they use AI systems responsibly and safely.
What laws and regulations govern the use of facial recognition software?
We know through our experience and research that regulation of high-risk technologies does not stifle adoption; without it, the supply chain and stakeholders have no clear route to remedies and no solid basis for choosing one vendor over another. Currently there are significant gaps in national legislation across all Western democracies. There is only one example of case law in the UK which examined the lawfulness of the use of FRT by a public authority, and in doing so acknowledged the lack of specific primary legislation and suitable regulation for the use case. Elsewhere there is a legislative and regulatory patchwork which mostly attempts to fit the FRT use case to a range of disparate, non-domain-specific Recitals and Articles in the General Data Protection Regulation.
Until the EU AI Act enters into law, requiring fundamental rights impact assessments and strict disclosure and registration of systems, there is no single rule or regulation which specifically encompasses the use of facial recognition technology in a commercial setting and governs all one-to-many use case scenarios. There are rules, but they are spread across disparate state and city ordinances which vary from state to state; for EU and UK organizations, biometric data processing, profiling and automated decision-making are covered by the GDPR/UK GDPR and the pending Data Protection and Digital Information Bill.

The data protection laws go part way: they regulate the processing of data and require Data Protection Impact Assessments, but what is missing is the real-world context of the use case and operationalized, domain-specific guidelines which interpret the law in an industrially actionable way. Voluntary good practice guides are instead derived from these thin legal frameworks, followed by standards written specifically to ensure the 'emergency stop button' is designed into the process and can be activated by designated, empowered, accountable person(s).

Navigating the patchwork is challenging, and currently action is taken only when significant and widespread harms occur; in the Rite Aid case, this was after consumer complaints to the FTC. In other regions, data protection commissioners are more focused on system failures by the data controller than on protecting individuals' privacy rights, which may increase the risk of harms manifesting unaddressed over time. In effect, there is another 'Rite Aid' waiting to happen elsewhere. Retailers can act now to protect their organizations from the risk of blanket bans by adopting the practices set out in local rules and implementing the best practice available from the industrial community.
Regulation helps solve governance problems because businesses usually act on the potential risks which face them, such as adverse reputational effects and the prospect of punitive fines. These risks may cause the use of AI systems to be elevated to the risk register and become subject to regular reviews by the governing body of the organisation. Conversely, if an organization has endemic ethical issues, it is unlikely to act responsibly, in which case the power of the law may intervene where voluntary frameworks are ignored.
The evidence in this case may support an argument that the Rite Aid use case was implemented in an environment where poor data security practice had prevailed for 10 years, and the poor governance of the facial recognition system may have reflected those practices, alongside a disregard for the effect on fundamental human and civil rights. It is our view that adding new technology to a zero-governance environment can only lead to misuse.
How to measure the responsible use of FRT?
It may seem obvious that it is not responsible to procure AI-based technology without checking whether it will provide balanced, unbiased and accurate results, yet this is a common occurrence, often due to a lack of AI literacy across the procuring organization. It is also not responsible to commission technology without conducting real-world, low-risk trials in the actual scenario to check whether its operation and accuracy are acceptable. Nor is it responsible to put it into action without training the employees who must interpret AI decisions, which are otherwise deferred to the computer ("the computer is always right"). Finally, proper feedback loops must be in place so that bias can be reported back to the vendor, the vendor can be compelled to take responsibility and accountability for the performance of the system in the user application, and use can cease if harms are discovered.
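As a sketch of what such a feedback loop might record, the fragment below computes false-positive match-alert rates per demographic group from staff-reviewed alert logs, the kind of tracking the FTC found Rite Aid never implemented, and flags a disparity for escalation. The group labels, disparity threshold and data are entirely hypothetical:

```python
from collections import defaultdict

def false_positive_rates(alerts):
    """Compute the false-positive match-alert rate per demographic group.

    `alerts` is a list of (group, was_false_positive) records logged by
    staff when they review each match alert. Group labels are placeholders.
    """
    totals = defaultdict(int)
    false_pos = defaultdict(int)
    for group, was_fp in alerts:
        totals[group] += 1
        if was_fp:
            false_pos[group] += 1
    return {g: false_pos[g] / totals[g] for g in totals}

def flag_disparity(rates, max_ratio=1.25):
    """Flag when the highest group rate exceeds the lowest by more than
    `max_ratio` -- a possible trigger for an 'emergency stop' review.
    Returns False when the lowest rate is zero (ratio undefined)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo > 0 and hi / lo > max_ratio

# Illustrative logged alerts, not real data.
alerts = [("group_1", True), ("group_1", False), ("group_1", False),
          ("group_2", True), ("group_2", True), ("group_2", False)]
rates = false_positive_rates(alerts)
print(rates, flag_disparity(rates))
```

A real monitoring programme would of course use proper statistical significance testing over far larger samples, but even a simple running tally like this is more than the FTC found in place at Rite Aid.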
Aside from the obviousness of responsibility and the need to exercise a duty of care when leading a business, we understand the difficulties which arise within large organisations with thousands of employees when they adopt a new technology. To assist, we developed a system for evaluating the use case in the industry and region of operation. Using our in-house, contextualised database, we create risk reports for potential users of the technology so they can understand whether they can use the technology at all, and if so, what guardrails and regular reviews they must put in place to ensure that responsible use is measured and maintained, with corrective actions by empowered, accountable people set out in advance of its use.
Are there voluntary self-regulation frameworks available?
Prior to the Rite Aid case, in May 2023, the FTC published a non-statutory policy statement setting out a number of minimum good-governance guardrails to encourage the safe and responsible collection and management of biometric data, including FRT. The FTC is committed to protecting consumer rights, and we are committed to helping our customers achieve the safe and responsible use of beneficial AI-based biometric technology aligned with the FTC's policy.
Where there is a void in primary legislation and regulatory frameworks, the commercial world sets out its own voluntary rules of engagement to reduce supply chain risk and harms to reputation and stakeholders. Anekanta®'s founder was a lead author of the British Security Industry Association (BSIA) Ethical and Legal Guide to the use of Facial Recognition Software, the first of its kind, which leans on the OECD principles for trustworthy AI as its underpinning framework and signposts good governance and accountability practice. The guide contains the all-important 'emergency stop button', recommending cessation of use if potential risks and harms cannot be mitigated.
Pursuant to the BSIA's guidance, the British Standards Institution (BSI) agreed to support the creation of a new standard, BS 9347 Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice, through a committee chaired by the BSIA and comprising talent from across the entire international facial recognition value chain: developers, systems integrators, end users, responsible AI practitioners, regulators, certification bodies, law enforcement stakeholders and government departments.
We recommend implementation of the standard, once released in early 2024, within our Anekanta® Facial Recognition Privacy Impact Risk Assessment Report™ mitigations, due to its whole-value-chain and system-lifecycle approach to mitigating risk and harms.
Our founder also plays an active role in the Security Industry Association AI Advisory Board and regularly provides voluntary input and guidance around the use of high-risk AI technologies including FRT.
For a company-wide AI management system which underpins EU AI Act compliance and is governed through a formal AI policy set by the board, a new standard, ISO/IEC 42001, has been launched. It addresses all aspects of the context of use of an AI system, the operational environment, and risk assessment and management, together with measurement and monitoring controls. Any organisation using AI systems should consider implementing this standard as a baseline, and bringing into the AI management system the domain-specific operational use case standards for each high-risk AI system, such as FRT.
Operationalized risk management frameworks
We anticipated that it would only be a matter of time before there was another large-scale misuse of facial recognition and biometric software. Our prediction was based on the fact that no national regulation sets minimum guardrails around the selection, use and governance of such systems. Other risk case studies have already shone a light on what can go wrong without AI governance, for instance the Clearview AI and Dutch Government case studies.
The EU AI Act (Act) takes substantial strides towards the regulation of AI systems which automate biometric-based decision-making. In response to the new requirements, which bridge the gap between the law and commercial operationalization, we work with organizations around the world to audit, categorize and provide pre-compliance support and strategies which enable them to plan and implement their AI governance policies effectively. FRT regulation is a central feature of the Act, imposing strict measures on law enforcement use, and is becoming a hot topic in all regions where the software is seen as beneficial to commercial loss prevention and frictionless facilities management. Users want to do the right thing and understand what they need to do to ensure a safe and responsible implementation.
Anekanta®’s services are underpinned by our Anekanta® AI Risk Intelligence System™ which assesses the functions and impacts of biometric and high-risk AI, resulting in our team confirming the risk categorizations and mitigations and producing an actionable report for management and business operations. Our system has been featured in the UK Government, Centre for Data, Ethics and Innovation (CDEI) Portfolio of AI Assurance Techniques.
Additionally our Anekanta® Facial Recognition Privacy Impact Risk Assessment Report™ provides high-level facial recognition risk assessment reports based on the use case and prevailing regional regulations and ordinances, steering the buyer towards the framework of risk mitigations they need to implement pre-deployment and during the entire AI system life cycle.
For further information about Anekanta®’s services please get in touch.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.



