Automated Facial Recognition – is the AI or the human biased?

AI keeps us safe, working in the background in security operations centres. AI gets us through an airport quickly. AI and satellite technology get us to the right location faster.

AI saves time when it recommends our movies, AI makes us look smarter when it improves our grammar, AI improves product quality on the production line and AI gets us the right products to our doorsteps in the most efficient way. These are just a few of the many beneficial uses for AI.

If we are safe and content, we don’t think about how and why that happened – it is not an issue until we are adversely affected. Can AI have a dark side, and if so, is it the AI’s shadow or the human’s? Not all uses of AI are seen as acceptable. It depends on the context of use and the potential impact on the privacy of the individual.

Automated Facial Recognition and AI on trial?

A subset of AI technology hit the headlines when the Police used Automated Facial Recognition (AFR) to help them find people more efficiently. But the technology ended up in the dock as the accused party.

Privacy groups mounted a legal challenge to its use, which revealed some issues, though these mostly related to the watch lists. Those watch lists are similar in make-up to the ones already used without any assistive technology at all; the only difference is that it takes the Police longer to do their vitally important job. Yet the search for a missing or vulnerable person is time-critical. So what is so bad about AFR technology if it improves the speed of the search and a life is saved?
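
To make the point concrete, here is a minimal, hypothetical sketch of what a watch-list match involves – the embeddings, names and threshold below are invented for illustration, not any real police system. The algorithm only ever compares a face against the list it is given; who appears on that list is a policy decision, not a technical one.

```python
# Hypothetical sketch of watch-list matching: compare a probe face embedding
# against enrolled embeddings and report candidates above a threshold.
# Names, embeddings and the 0.6 threshold are illustrative assumptions only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return (name, score) pairs at or above the threshold, best match first."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in watchlist]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Invented 128-dimensional embeddings standing in for real face templates.
rng = np.random.default_rng(1)
watchlist = [("missing_person_042", rng.normal(size=128)),
             ("wanted_person_107", rng.normal(size=128))]
probe = watchlist[0][1] + rng.normal(scale=0.1, size=128)  # a camera frame

print(match_against_watchlist(probe, watchlist))
# Anyone not enrolled on the list is never matched; the algorithm has no say
# in who is enrolled. That decision sits with policy, not the technology.
```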

The focus needs to be directed at improving and supporting Police policy rather than singling out this very useful technology as a scapegoat for the evils of the world.

In the private sector, AFR enables contactless entry points when converged with access control data. Used correctly alongside thermal technology, social distancing monitoring and privately managed contact tracing, AI can add real value to making public areas safe, in turn protecting people and the economy.

Without AI, the risk of virus transmission would be much higher because every step in the physical management process would have to involve human intervention. With the best will in the world, masks and uncontrolled or irregular hand sanitising are not sufficient to curb the spread.

How many processes based on old policies and anachronisms get in the way of business and societal improvements today? There is no denying that these problems exist.

AI acts like a mirror, reflecting its creator’s cognitive biases

Many of these issues are reflected in AI development. AI is only as good as its human creators. However, in some cases it acts like a mirror, showing up the cracks in society that are long overdue an overhaul.

The world has changed, but new generations inherit the idiosyncrasies and poor policies of their predecessors. There is a saying, often attributed to Albert Einstein, that if you carry on doing the same things, you shouldn’t expect a different result.

This is why many early HR AIs failed; indeed, Amazon withdrew AI from its recruitment process because of bias. The AI had been fed biased historical data reflecting discrimination in society, so it simply magnified the bias and didn’t change anything at all. Humans have to do that.
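
As a hypothetical sketch of how that magnification happens – the data, features and model below are invented for illustration, not Amazon’s actual system – a model trained on historical decisions that were skewed against one group will cheerfully learn and reproduce that skew:

```python
# Hypothetical sketch: a model trained on biased historical hiring data
# simply learns and reproduces the bias. Data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)   # protected attribute (e.g. gender encoded 0/1)
skill = rng.normal(0, 1, n)     # genuinely job-relevant signal

# Past human decisions favoured group 1 regardless of skill.
hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

# Train on history, including the protected attribute as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model predicts future hires with the same skew it was shown.
preds = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
```

Note that simply deleting the protected attribute is rarely enough: correlated proxy features such as postcode, hobbies or word choice in a CV can carry the same signal.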

Without a deep review of the underlying processes and policies, AI simply makes bad societal and business practice more efficiently bad. Do we want that? The answer is obviously no, unless perhaps you have a stake in the badness.

Technology is not beyond redemption

There is no suggestion that technology is beyond redemption. AI can become biased if it is fed biased data – just like humans. A balanced view should always be sought, and the only way to achieve that is through diverse inputs and unbiased, ethical oversight. This is not dissimilar to how good governance should work in business.
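
One way such oversight can be made concrete – a minimal sketch, assuming a simple selection scenario; the group labels are invented and the 0.8 threshold is the well-known “four-fifths” rule of thumb rather than a legal standard – is to routinely compare a system’s outcome rates across groups before relying on it:

```python
# Hypothetical oversight check: compare positive-outcome rates across groups.
# A ratio well below 0.8 (the common "four-fifths" rule of thumb) flags the
# system for human review. Group labels and data are illustrative only.
from collections import Counter

def disparate_impact_ratio(groups, decisions):
    """Ratio of the lowest group selection rate to the highest."""
    totals, positives = Counter(), Counter()
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += int(d)
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

groups    = ["A", "A", "A", "B", "B", "B", "B", "B"]
decisions = [ 1,   0,   0,   1,   1,   1,   1,   0 ]  # e.g. shortlisted or not

ratio, rates = disparate_impact_ratio(groups, decisions)
print(rates)                                  # per-group selection rates
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for human review before relying on the system.")
```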

In the high-risk COVID environment, reducing physical interaction is paramount; the human increases the risk. And in business, the human interactions often required to join dysfunctional processes together add little value and can, in fact, introduce risk and error.

When the use case is properly defined, AI and automation can make data handling more efficient, so the human focus can be on the speed, value and accuracy of the decision rather than on repetition and finding patterns in big data sets. As people, we are not very good at either. If we were, you could argue that COVID would not have spread so effectively. At some point, everyone forgets that the invisible threat may still be all around us.

AI works on a global scale

On a global scale, AI is now helping the search for effective and targeted treatments by analysing multiple data sources. In security, those data sources include video and satellite data. When so many apparently random activities happen at once, AI can derive meaning from this vast array to provide real-time situational awareness of any evolving threat, including the progress of COVID.

These use cases for AI may seem out of reach if you are in a business environment focused on delivering results to shareholders without detriment to stakeholders – AI is just for big data companies, governments and authorities, right?

You don’t need to be a big data company to benefit from AI

AI, automation and digitalisation are at the top of the Board agenda for many businesses, especially now with so many remote workers. Businesses are implementing AI to solve internal process problems, and for many industries, AI is solving widespread problems when designed into standard products.

Any potential use of AI should be thoroughly assessed to ensure that there is no negative impact on the individual and that its deployment is ethical.


We wrote a framework for the ethical use of AI in business. Time for a health check?

We can help you with this. We will assess the need and explain which AI technologies may or may not be appropriate to move your business forward into its future state.

For further information about AFR and the steps being taken to create guidance for its ethical use, see the BSIA Webinar available in the Anekanta Consulting media portfolio.

We all need to do things differently now.

Contact us at ask@anekanta.co.uk or fill out our contact form here.

We'd love to hear your views.
