Facial Recognition Software: Ethical and Legal Use in Security Applications
Introduction
We provide an exclusive insight into the pioneering stakeholder liaison work we undertake with the physical security industry on the guidance, standards and regulation of high-risk AI-based technologies such as facial recognition software.
Our founder, Pauline Norstrom, played a leading role, in a volunteer capacity, in the creation of the British Security Industry Association’s (BSIA) ground-breaking guide to the legal and ethical use of facial recognition software. She shaped the key issues, provided expert advice and brought critical stakeholders together to contribute their views and input.
She was delighted to be asked by the BSIA and the Confederation of European Security Services (CoESS) to present the guide, and her expertise in AI and facial recognition, at the Annual General Assembly held in Brussels on 14th October 2022.

Background
Anekanta Consulting’s founder is an active campaigner for the ethical use of high-risk AI-based technologies. She develops mutually beneficial relationships with clients and stakeholders which foster good practice and information sharing, and in turn create and improve the market and stimulate the safe use of controversial technologies.
Alongside her business roles in board leadership and the global commercialisation of highly complex video and analytics products for security, Pauline has worked with the BSIA on a voluntary basis for over 22 years. During this time she has played a significant role in the creation of guidance and best practice in this field. In recent years, AI-based technology has matured to the point where it is credible and commercially viable to deploy analytics to support human decision making. Facial recognition software is an AI-based technology which, alongside other AI technologies, is growing in use within the security industry for purposes ranging from access control to the protection of retail staff and the location of victims of human trafficking.
Users and providers need clear guidance to help them make decisions about the risks and harms associated with deploying AI. There have been some high-profile “harm” cases recently, which are explained further below. Safe and ethical use can only happen if a number of areas are addressed in new ways and from new and different perspectives. We believe that businesses must look beyond shareholders’ demands for a return on investment and adopt a stakeholder approach to AI governance.
Risk should be assessed in new ways, by looking at potential impacts beyond the business itself rather than waiting for harms to occur. The considerations must eliminate harms and bias, and make AI transparent and explainable; boards must also be educated to implement ethical governance frameworks which identify the responsible and accountable officer and put in place policies and remedies, including an “emergency stop button”.
The work of our founder and of the Anekanta Consulting business spans countries, jurisdictions and continents, from the UK and the European Union to North America, South America, the Middle East, Africa and the Asia Pacific region.
Specifically, her work with the BSIA in driving through the Guide, working with peers, joining up stakeholders and fostering dialogue with all viewpoints in mind, has altered the perception of facial recognition technology, moving it towards a clearer position from which businesses can make risk-based decisions.
She also initiated the invitation from the All-Party Parliamentary Group on AI (APPG AI) for the BSIA to present the guide as evidence, which was later included in a UK Parliamentary Brief produced by the APPG AI’s secretariat, The Big Innovation Centre.
The Guide frames good practice against unethical practice and presents an operational starting point which may maximise benefit and reduce harms.
Since the emergence of the proposed EU AI Act, and the release of the BSIA Guide, Pauline has provided input to CoESS’s highly influential AI experts group. She brought use cases from around the world and specific feedback with regard to the proposed EU AI Act and remote biometric identification when used in security applications. The CoESS team have been instrumental in providing vital industry feedback and have recommended changes which have been acknowledged by the European Council and Parliament.
Framing the Issues
AI Facial Recognition in Security
Why is Facial Recognition in the spotlight?
There is significant scope for misuse which may infringe upon human rights; the technology can be used for good and for ill. We use a number of high-profile examples to frame the complexity and the current gaps in legislation, which we hope to see addressed in the EU AI Act and in other government and state/regional legislation around the world. The EU AI Act is held up as the gold standard; however, its approach to facial recognition software may exclude support for key applications.
Example 1. Commercial use
Clearview AI is a facial recognition app which can be used for identification purposes: the user points the app (running on a mobile device) at anyone, and it identifies them by their image, facial biometric face map and name.
There were a number of issues: individuals had no control, through consent, over whether they were placed on watch lists, and the source of the data was questionable – where did it come from? Investigation revealed that the data was taken mainly from social media accounts: images and names from millions of public user profiles which were assumed to be free and available to use.
The UK Information Commissioner’s Office (ICO) challenged the company, alleging that this use of the images was illegal: the people concerned had not consented to their use in this way, and those attempting to find out whether they were on the database were put through further intrusive data collection processes. The ICO fined Clearview AI £7.5m (a fine which has since been appealed, and we await the outcome) and ordered the company to remove UK citizens’ data from its database. Since then, Clearview AI appears to have taken steps to control the use of the product and to know its customers. The company says it is working in Ukraine to assist humanitarian efforts and to counter espionage.
Example 2. Law enforcement
The second set of use case examples relates to police use, again for identification. In the UK, in the case of Ed Bridges (supported by the civil liberties organisation Liberty) v South Wales Police, South Wales Police were found to have failed to ask the provider about testing for bias, and had considered neither discrimination when compiling their watch lists nor the impact on individuals who were not of interest to the police.
In the USA, various police forces were banned from using facial recognition software due to discrimination, fuelled by the absence of a human in the loop in the final decision process. This led to the wrongful arrest of innocent citizens.
To remedy these failures, new guidance was created and new laws are in the making; in these cases an end-to-end approach is now taken to compiling watch lists, and a human is kept in the loop to make the final decision to act. Action may involve a physical intervention or closer monitoring for security purposes in the public interest.
In Virginia, USA, where the technology was banned, the police found that unsolved crime increased when they stopped using facial recognition technology. They are planning to re-introduce it, this time with strict controls, guidance and training of the operators. These are all good examples of what can happen if use is not transparent and controls are not in place.
The public fear technology if they are not aware of how they are being exposed to it. Transparency around overt use is key.
Distinct Use Case Scenarios
The examples given relate to identification, the most contentious use case. There are two distinct uses, each of which is subject to different laws and proposed laws. Identification (one-to-many matching) will sit under the new EU AI Act as a high-risk AI; use without specific conditions being met will be banned. Verification (one-to-one matching) sits under GDPR and can be used if certain criteria are met, such as informed consent.

Guidance needed – key areas
Robust process for the needs assessment
Due to the likelihood that all use of facial recognition software was going to be banned, the BSIA created ground-breaking guidance which sets out a framework for ethical and legal use, published at the beginning of 2021. It covers a number of key areas, the first being the needs assessment. This encourages challenge: why is it needed, where will it be used, and what is its purpose? It then calls for an ethical assessment (who could it harm?) and a data protection impact assessment to mitigate the risks, and finally a regular review process to ensure its use remains necessary and proportionate. If these criteria cannot be met, we recommend that its use is reconsidered. This is the emergency stop button. We are not afraid to say that it is not the best solution every time, if the risk of harm to individuals outweighs the benefits.
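To show how such a framework might be operationalised in practice, the sketch below encodes the assessment as a simple gating check. This is a minimal sketch in Python; the checklist fields and the pass/stop logic are our own illustrative assumptions, not definitions taken from the Guide.

```python
from dataclasses import dataclass

@dataclass
class NeedsAssessment:
    """Illustrative checklist loosely following the Guide's key areas."""
    purpose_defined: bool          # why is it needed, and what is its purpose?
    deployment_scoped: bool        # where will it be used?
    ethical_assessment_done: bool  # who could it harm?
    dpia_completed: bool           # data protection impact assessment
    review_scheduled: bool         # regular necessity/proportionality review

def may_proceed(a: NeedsAssessment) -> bool:
    """Every criterion must be met; any failure is the emergency stop button."""
    return all([
        a.purpose_defined,
        a.deployment_scoped,
        a.ethical_assessment_done,
        a.dpia_completed,
        a.review_scheduled,
    ])

# Example: a missing DPIA halts the deployment.
assessment = NeedsAssessment(True, True, True, False, True)
if not may_proceed(assessment):
    print("Stop: reconsider the deployment before proceeding.")
```

The deliberate design choice here is that a single failed criterion blocks deployment outright, mirroring the Guide’s position that use should be reconsidered whenever the criteria cannot all be met.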

Be clear about the use cases
The BSIA working group, comprising members and stakeholders, set out to be clear about the use cases and in each case ensured that there is a human in the loop to review contentious software decisions.
There are two distinct use cases described in the Guide:
Verification – which seeks to confirm whether it is you.

Typical applications include access to secure areas, safety in dangerous situations such as fire evacuation, personal access to devices, and border control, where fast verification of identity speeds up the flow of queues. A clear diagram shows the way the technology works: capturing the image, finding the face, mapping the face, and obtaining consent to be held on the watch list or enrolled database. Here, if there is a match, a gate may open with no human in the loop – this is verified access. If there is a “maybe” match, a human checks; if there is no match, a human also checks.
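A minimal sketch of that decision flow follows, assuming the matcher returns a similarity score between 0 and 1; the threshold values and names are illustrative assumptions, not figures from the Guide.

```python
# Illustrative thresholds only; real systems tune these against tested accuracy.
MATCH_THRESHOLD = 0.90  # at or above this: confident match, gate opens
MAYBE_THRESHOLD = 0.70  # between the two: an uncertain "maybe" match

def verification_decision(similarity: float) -> str:
    """Map a 1:1 match score to an action, per the flow described above."""
    if similarity >= MATCH_THRESHOLD:
        return "open_gate"           # verified access, no human in the loop
    if similarity >= MAYBE_THRESHOLD:
        return "human_review_maybe"  # a "maybe" match: a human checks
    return "human_review_no_match"   # no match: a human also checks

print(verification_decision(0.95))  # open_gate
print(verification_decision(0.80))  # human_review_maybe
print(verification_decision(0.40))  # human_review_no_match
```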
The second use case – the most contentious – is identification, which seeks to determine who it is.

This is when the person is not pre-enrolled (as they would be for access control) and is not expected to be there. This is remote biometric identification as defined in the EU AI Act. Use cases include border control to identify persons of interest; private venues such as retail premises and stadia to control theft or anti-social behaviour at football games; and law enforcement and counter-terrorism, both live and retrospectively post-event, to identify suspects. When the technology is used in this way it is very difficult for the person to consent to its use, and in covert counter-terrorism work, their knowledge of it could hinder the investigation. Some of these uses are allowed under the EU law enforcement directive, but for general commercial use and for surveillance of the public, most uses will be banned unless specific approval is given by the authorities in the public interest. We sought to clarify the role of the human in the loop when the technology is used in this way, and in all cases we recommend that there is a human in the loop to confirm the decision of the software. This advice is there to reduce the potential for wrongful arrest or a harmful action which infringes upon the human rights of the subject.
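To make that recommendation concrete, here is a minimal sketch of an identification alert flow in which no action can be taken without human confirmation. The types, names and alert handling are our own illustrative assumptions, not part of the Guide.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    """A possible 1:N match raised by the identification software."""
    watchlist_id: str
    similarity: float

def handle_alert(candidate: Candidate,
                 human_confirms: Callable[[Candidate], bool]) -> str:
    """The software only ever raises an alert; a human makes the final call."""
    if not human_confirms(candidate):
        return "dismissed"            # possible false positive: no action taken
    return "intervention_authorised"  # physical intervention or closer monitoring

# Example: an operator reviews the image pair and rejects the match,
# so nothing happens to the person who triggered the alert.
result = handle_alert(Candidate("WL-042", 0.87), human_confirms=lambda c: False)
print(result)  # dismissed
```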
Ethical principles
In addition to providing a clear distinction between verification and identification, the Guide is focused on foundational ethical principles.

These are based on the OECD guidelines and the EU ethical guidelines for trustworthy AI, and include:
Human agency and oversight – a human in the loop and the stop button.
Technical robustness and safety – does it work, and can it match to a stated level of accuracy?
Privacy and data governance – are GDPR requirements met, and are there internal processes which document a policy and ensure it is adhered to?
Diversity, non-discrimination and fairness – is the software biased, does it fail to match certain demographics, is there a way of recording this, and is the developer informed and required to remedy it?
Societal and environmental wellbeing – is the software causing a chilling effect, leaving people unable to feel free to go about their lives without fear of surveillance, and does the software consider the environment, for example power consumption?
Accountability and governance – responsibility must sit with an accountable person at the highest level of the company, preferably the board, with clear policies in place to ensure that the provider and user maintain awareness of the software’s use and govern it in accordance with the agreed purpose.
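The fairness principle in particular lends itself to measurement. Below is a minimal sketch, assuming a simple test set labelled by demographic group (our own illustrative framing, not a method from the Guide), of how disparities in match rates might be recorded so the developer can be informed and required to remedy them.

```python
from collections import defaultdict

def match_rates_by_group(results):
    """results: iterable of (demographic_group, matched_correctly) pairs.
    Returns the true-match rate per group so disparities can be reported."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, matched in results:
        totals[group] += 1
        hits[group] += int(matched)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative data only: a gap like this should be logged, and the
# developer informed and required to remedy it.
rates = match_rates_by_group([("A", True), ("A", True), ("B", True), ("B", False)])
print(rates)  # {'A': 1.0, 'B': 0.5}
```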
The legal framework recommended is drawn from the UK GDPR/DPA 2018 and the UK ICO’s guidance on data protection impact assessments. If the criteria cannot all be met, it is advised that the implementation should not go ahead.

Privacy, storage, reference database
Clear guidance is set out on the management and review of the purpose of the system, with specific focus on the make-up of the watchlists, the safety and security of storage, and the deletion of data as soon as possible. Steps are recommended to provide signage, obtain consent where possible, and ensure that use is necessary and proportionate.
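As an illustration of the deletion point, the sketch below enforces a retention window over stored records. The window length and record shape are illustrative assumptions; real values should come from the data protection impact assessment and organisational policy.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=31)  # illustrative window; set per DPIA and policy

def purge_expired(records, now=None):
    """Keep only records still inside the retention window.
    records: list of dicts each holding a 'captured_at' datetime."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["captured_at"] <= RETENTION]

# Example: a 40-day-old capture is dropped, a 2-day-old capture is kept.
records = [
    {"id": 1, "captured_at": datetime.now(timezone.utc) - timedelta(days=40)},
    {"id": 2, "captured_at": datetime.now(timezone.utc) - timedelta(days=2)},
]
print([r["id"] for r in purge_expired(records)])  # [2]
```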
Summary
The presentation covered what could go wrong, providing key use cases to learn from; drew a clear distinction between identification and verification; introduced the needs assessment and the question of who says stop; set out the ethical principles which underpin the document; and emphasised the importance of a human in the loop in the final decision, along with privacy, governance and accountability.
What are the next steps for the guide?
The Guide is released and openly available on the BSIA website. It has also been published on the UK Government Home Office website. A new British Standard, BS 9347, based on the Guide, is in progress at the British Standards Institution.
We are keen to learn of more use cases and to build the knowledge base and champion good practice.
Further information

About CoESS
CoESS brings together 23 national associations across Europe, representing 45,000 private security companies with 2 million private security guards.
The mission of CoESS is to support the growth of an industry that delivers solutions of high QUALITY and professionalism. CoESS and its members seek to raise the bar and drive the standards up in terms of employment conditions, training and certification, as well as the use of innovative technology and mission-centric services that protect European citizens, businesses, public spaces and critical infrastructure.
Quality results in higher SAFETY standards in protecting people, assets and infrastructure – and it also requires COMPLIANCE with administrative, social, financial and tax rules. These are the preconditions to maintain TRUST from the authorities, the corporate world and society in general.
CoESS is a strongly committed partner to the EU institutions in bringing added value to European security and social policies. We work for modern industry standards and strengthened cooperation with private operators and law enforcement. We promote high quality services and stand for fair competition between players.

About BSIA
The British Security Industry Association is the trade association for the private security industry in the UK. Its members provide over 70% of UK security products and services and adhere to strict quality standards.
Formed in 1967, the BSIA provides its members with a single representative voice to government leaders and agencies, British, European and world standards bodies, and industry stakeholders including police as well as the industry regulator and inspectorates.
Membership of the BSIA is voluntary and subject to certain criteria, including adherence to the ISO 9001 quality standard, two years’ trading history and the satisfactory demonstration of sound company finances. As such, choosing a BSIA member company to supply products or services can give security buyers significant peace of mind.
For security companies, membership of the BSIA provides a reputable commercial advantage and places their business at the heart of influencing the future of the security industry. By joining the BSIA, companies become part of a unique group of high-quality, professional companies which are well respected and well represented to government, end users, specifiers, and standards and legislative bodies.
Currently, the BSIA represents 16 sections of membership across the security spectrum, including both manpower and electronic-based areas of specialism.

About Anekanta Consulting
Anekanta®Consulting is an AI innovation and strategic advisory company founded by an experienced security industry technology leader.
Anekanta®Consulting works with international stakeholders across sectors and domains including defence, security and video surveillance, critical infrastructure and smart cities.
We enhance the innovation, product and corporate strategies of our global technology clients through our specialised services, which range from strategy consulting to horizon scanning and M&A research.
We navigate the complex legislative landscape including the EU Artificial Intelligence Act and GDPR requirements for the use of video and AI across horizontal industrial applications. We also lead and influence the ethical and legal use of controversial AI technologies such as automated facial recognition software.
We deliver outstanding AI and technology strategy outcomes which augment strategic decision making at the highest level.
Our teams are built with customer requirements at the centre of our mission. We involve the best talent to achieve outstanding results ahead of the market. Teams currently operate in small groups of 2-3 people for a project, scaling to upwards of 25 people for a longer deliverable.
Our work is confidential, so we are unable to share the majority of our customer missions. If you design, develop or operate high-risk AI and sensor technology in security, defence, retail, smart buildings or smart cities, please get in touch. High-risk technology is, and will continue to be, regulated to protect the interests of the individual; Anekanta Consulting can help you navigate this complex landscape legally and ethically and achieve a competitive commercial benefit.
See the presentation
Acknowledgements
Thanks are extended to the Anekanta Consulting team for their input and expertise, and to the BSIA and CoESS for inviting Anekanta Consulting’s founder to present at the Annual General Assembly.
All images and logos are copyright of their respective owners.