Beneficial AI-based technology reveals systemic bias, leading to stronger laws and policies

It has always been our belief that the use of AI can challenge long-standing practices and even help to remove bias, whether overt or subtle, particularly where that bias is rooted in societal anachronisms.

Looking at the application of AI in defence and security, AI has been used in the professional security domain for several decades, enabling security operators to make fast, well-informed decisions to mitigate security threats.

In recent times, the development of deep-learning neural networks, the power of cloud computing, faster and more reliable communications, and IoT sensing technologies have generated an avalanche of vitally important security data that was not previously available. AI, put to good use, can automatically correlate and analyse this mass of data, helping professionals act faster to foil criminal activity, respond to evolving threats and keep people and places safe.

So why does the use of AI, and specifically automatic facial recognition, by law enforcement agencies hit the headlines, when its use by the professional security industry is generally seen as useful, safe and uncontroversial?

It did not help matters when global technology players rushed to the law enforcement market, perhaps in pursuit of maximising shareholder value. IBM, Amazon and Microsoft, for example, acknowledged their failure to produce bias-free automatic facial recognition software and withdrew it from sale to law enforcement agencies. The tech giants had put biased software into the hands of potentially biased users, magnifying the problems that arise when AI is irresponsibly released without strong safeguards in high-risk applications. This is one of the worst-case scenarios.

But the blunt instrument used to remove bias in the AI was to eradicate the evidence of bias by withdrawing the technology that highlighted it.

AI is helping to focus the spotlight on societal and governance problems, such as a lack of diversity in development processes and gaps in legislation and ethical frameworks.

Effective, ethical deployment of AI can only happen if both the technology and its implementations are trusted, and that trust depends on robust implementation frameworks being in place.

In the professional security industry, AI is already trusted by users in multiple sectors to do its job within current legal frameworks. The industry is governed by high standards, ethics and legislation, which take account of privacy impact assessments, proportionality, ownership and accountability.

In the public sector, these frameworks need to go further so that the changing face of innovation can be put to best beneficial use. The recent UK ruling on South Wales Police's use of facial recognition is a case in point.

If AI technologies are to be used by law enforcement agencies, legal frameworks need to accommodate high-risk applications transparently, adhere to high ethical standards and be subject to independent oversight.

Anekanta Consulting has created a guide to the ethical implementation of AI in business processes.

Its principles apply across other domains too. At the root of the guide sit three foundational points, which were also revealed as weaknesses in the South Wales Police implementation of automatic facial recognition in public places:

1) Undertake thorough due diligence on the source data that is subject to AI analysis. Determine whether it is legal to use the data in the application, placing the primary focus on the privacy and human rights of the individual.
2) Challenge and test the AI for bias and, if bias is discovered, provide a mechanism for feedback to the developer so that the issues in the training data can be corrected. Do not use biased AI until the issues have been corrected and valid tests confirm this (a sketch of one such test follows this list).
3) Ensure that the implementation of AI is overseen at the highest level in the organisation, with clearly documented accountability for its impact, providing a means of instituting change through those responsible.
These three foundational points apply to the implementation of AI in almost any scenario.
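To make point 2 concrete, below is a minimal Python sketch of the kind of disparity check an implementer might run before trusting a facial recognition system. Everything in it is an illustrative assumption rather than part of the Anekanta guide: the demographic groupings, the synthetic comparison records, the `bias_report` helper and the acceptance threshold are all hypothetical.

```python
# Illustrative sketch only: measure per-group false match rates and refuse
# deployment while the disparity between groups exceeds a threshold.
from collections import defaultdict


def false_match_rate(records):
    """Fraction of impostor (different-person) comparisons wrongly accepted."""
    impostors = [r for r in records if not r["same_person"]]
    if not impostors:
        return 0.0
    return sum(r["system_says_match"] for r in impostors) / len(impostors)


def bias_report(records, max_ratio=1.25):
    """Group comparisons by demographic label and flag disparities.

    max_ratio is an illustrative threshold: if the worst group's false
    match rate exceeds the best group's by more than this factor, the
    system fails and should not be deployed until corrected (point 2).
    """
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r)
    rates = {g: false_match_rate(rs) for g, rs in by_group.items()}
    worst, best = max(rates.values()), min(rates.values())
    return rates, worst <= best * max_ratio


if __name__ == "__main__":
    # Synthetic records: each is one impostor comparison as judged by the
    # system under test, labelled with a demographic group. In practice
    # these decisions would come from the facial recognition software.
    records = (
        [{"group": "A", "same_person": False, "system_says_match": i < 1}
         for i in range(100)]
        + [{"group": "B", "same_person": False, "system_says_match": i < 8}
           for i in range(100)]
    )
    rates, passed = bias_report(records)
    for group, rate in sorted(rates.items()):
        print(f"group {group}: false match rate {rate:.2%}")
    print("PASS" if passed else "FAIL: do not deploy until corrected and retested")
```

A real evaluation would use far larger, representative test sets and consider false non-match rates as well, but the principle is the same: measure error rates per group and withhold deployment while disparities persist.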

In the South Wales Police case, the technology itself was not found to be at fault; it was the source data (watch lists) and the policies in place to ensure compliance with the relevant privacy legislation that needed to be strengthened. The legal test was welcomed by the police and has provided clarity around the UK laws governing the use and make-up of watch lists, and the need to test AI for bias.

AI is everywhere, and when it is used ethically within the law and the relevant policies are in place, it can help to reduce bias, highlight problems in business and public sector processes, and ultimately improve the lives of many individuals.

For the full guide to the ethical implementation of AI, please click on the box below. We would welcome a discussion about your needs; send us a brief outline by clicking here.

We'd love to hear your views.
