AI Transparency: Is It Important?

AI transparency in the physical security domain

This article was first published in Fire and Security Matters Magazine July 2020: https://www.fsmatters.com/Artificial-Intelligence-The-Transparency-Challenge

Why should we challenge the transparency of AI?

UPDATED 10th March 2023.

In the two and a half years since we published this opinion piece, ChatGPT emerged in November 2022, a derivative of the more powerful davinci family of models.

We use ChatGPT and its variants at Anekanta Consulting and find it a useful helper in our research and advisory business. Our early thoughts can be found in our article, where we find some hype but also that, in trained and intelligent hands, the technology adds value when used in the right way.

In 2020 we discussed the predecessor to ChatGPT and the subject of transparency. The technology behind it, generative AI, creates new content. ChatGPT is an order of magnitude more convincing, and the concerns we expressed are becoming a reality: deep fakes, impersonation, cyber attacks, social engineering and so on.

We were ahead of the market at the time; since then, the technology has matured and become widely accessible in a range of applications, including security.

The critical issues which underpin trust in AI are in sharper focus for organisations that wish to operate ethically and within the law and emerging regulations: governance and accountability, privacy, bias, explainability, transparency and cyber security.

ORIGINAL TEXT JULY 2020

A powerful new general purpose AI technology has emerged from the computer science community. GPT-3 contains over 100 times more parameters than its predecessor, GPT-2, resulting in a quantum leap in capability.

One beneficial application of this new AI engine could be to improve the analysis of millions of lines of text in legal documents, medical documents and academic papers in order to extract meaning and, in doing so, accelerate human decision making. This could lead to a faster route to a cure for infectious diseases such as COVID-19, or help to recommend patient-specific treatments.

Natural language processing (NLP) may seem fairly straightforward when compared with the AIs needed in the security domain. The structure of language is generally well defined, so “all” the algorithm has to do is interpret the form of the characters (using convolutional neural networks/image processing) where the text is not already machine readable, and then assimilate the words. NLP has traditionally relied on recurrent neural networks (RNNs), although the latest models, including GPT-3, are built on transformer architectures; either way, it is the context which presents the greatest challenge to any neural network. Every time you use predictive text on your phone you are interacting with a form of NLP. But does it always get it right? Usually you have to decide which option is most appropriate. The human, knowing the intended meaning, makes the final decision.
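To make the predictive-text analogy concrete, here is a minimal sketch (our illustration, not part of the original article) of next-word prediction in Python using simple bigram counts. A real NLP model is vastly more sophisticated, but the pattern is the same: the model ranks candidate words and the human, knowing the intended meaning, makes the final choice.

```python
# A toy bigram "predictive text" model: counts which word tends to follow
# which, then offers ranked suggestions for the next word.
from collections import Counter, defaultdict

corpus = (
    "the alarm was triggered at the gate "
    "the alarm was silenced by the operator "
    "the gate was opened by the operator"
).split()

# Count word -> next-word frequencies.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word: str, k: int = 3) -> list[str]:
    """Return up to k likely next words, most frequent first."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("the"))   # e.g. ['alarm', 'gate', 'operator']
print(suggest("was"))   # e.g. ['triggered', 'silenced', 'opened']
# The model only ranks options; the human still picks the word that fits the context.
```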

With a brief from a human, GPT-3 may be able to create factual text which is indistinguishable from human writing. This may alarm those who enjoy spending hours assimilating research data. In stark contrast, replicating the human traits of ideation and creative flair, which arise from the unique way we each experience life, is thought to be beyond the capability of such technology.

AI terminology may sound complex to those unfamiliar with the nuances of computer science; nevertheless, business leaders are still accountable for the ethical application of AI.

Beneficial AI in the physical security domain

In security, things are not so “easy” for the AI as they are with NLP. This is due to the inconsistent and changing nature of the environments, the random activities of malevolent actors and the multitude of available data sources from which meaning may be extracted.

The smart city, smart building and smart public place all generate billions of bytes of data which can be analysed automatically by AI. Autonomous IoT building sensor technologies are proliferating, and although legacy systems create drag in the convergence process there is an increasing ability to convert protocols and transcode unstructured data resulting in a harmonised data pool suitable for AI analysis. After all this manipulation for accessibility, it is quite possible that the AIs may be analysing data which is no longer in its original format.
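As a minimal sketch of the kind of protocol conversion described above (the formats and field names here are our own assumptions for illustration, not any vendor's actual schema), disparate events can be normalised into a common record while keeping a reference to the original payload:

```python
# A sketch of protocol conversion: events arriving in different vendor formats
# are normalised into one common record so that downstream analytics can use them.
# The harmonised record is no longer in its original format, which is why a
# reference to the raw source is preserved for traceability.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalisedEvent:
    source: str        # which device/system produced the event
    event_type: str    # harmonised event category
    timestamp: str     # ISO 8601, UTC
    raw: dict          # the original payload, kept for traceability

def from_access_control(payload: dict) -> NormalisedEvent:
    # Hypothetical legacy format: {"door": "D12", "evt": "FORCED", "ts": 1710051200}
    return NormalisedEvent(
        source=f"door/{payload['door']}",
        event_type="door_forced" if payload["evt"] == "FORCED" else "door_other",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc).isoformat(),
        raw=payload,
    )

def from_video_analytics(payload: dict) -> NormalisedEvent:
    # Hypothetical analytics format: {"cam": 7, "label": "person", "time": "..."}
    return NormalisedEvent(
        source=f"camera/{payload['cam']}",
        event_type=f"detected_{payload['label']}",
        timestamp=payload["time"],
        raw=payload,
    )

print(from_access_control({"door": "D12", "evt": "FORCED", "ts": 1710051200}))
print(from_video_analytics({"cam": 7, "label": "person", "time": "2024-03-10T06:13:20Z"}))
```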

What is transparency?

The purpose of AI in the security domain is simple. It is to enable better human decision making resulting in fast and relevant action to protect people, protect property and reduce loss.

Transparency is one of the building blocks of trust. In the context of security decisions, we need to understand and trust how the AI reached its recommendation; adoption then follows trust. In a constantly moving environment it is difficult to test whether an AI decision can be replicated when detecting a one-off threat. Where did the AI’s training data come from in this case?

The use of AI must be tested in context, because scenarios may arise which the developers, with their necessarily narrow experience, could not have anticipated and which are therefore new to the AI. This may seem obvious, but you wouldn’t drop an untrained person into a security and risk environment without contextual and professional training.

On the subject of training, for video analytics to work effectively across a range of scenarios, it has to learn from millions of classified images gathered from many sources. People are invited to contribute to open source projects by looking at images and describing what they see. Emerging AIs can now learn from their own learnings, which means that they can recognise familiar objects in a changing scene over time. This newer technology reduces the need to label every instance of a type of object in an image, and it also reduces the anomalies which creep into an AI when the background is inadvertently classified too. For example, if a boat and a blue sky become associated because all the classified images of boats also have a blue sky in the background, only a boat with a blue sky will be recognised; boats without blue sky will not be recognised as boats. Some AIs really are that raw and fundamentally untrained.
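The boat-and-blue-sky problem can be reproduced in miniature. The sketch below (our illustration using scikit-learn, not a real video analytics model) trains a simple classifier on a biased data set in which boats almost always appear under blue sky; at deployment time, a boat under a grey sky is no longer confidently recognised because the model has learned to lean on the sky rather than the boat.

```python
# A toy demonstration of spurious correlation in biased training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Ground truth: does the image contain a boat?
is_boat = rng.integers(0, 2, n)

# Feature 1: a noisy "hull shape" cue - genuinely related to boats, but imperfect.
hull_cue = np.where(rng.random(n) < 0.8, is_boat, 1 - is_boat)

# Feature 2: "blue sky" - in this biased training set, boats are almost always
# photographed against blue sky, so sky is a near-perfect (but spurious) proxy.
blue_sky = np.where(rng.random(n) < 0.95, is_boat, 1 - is_boat)

X_train = np.column_stack([hull_cue, blue_sky])
clf = LogisticRegression().fit(X_train, is_boat)

# Deployment: a boat under a grey sky - the hull cue is present, the sky cue is not.
print(clf.predict_proba([[1, 0]]))  # probability of "boat" drops sharply
print(clf.coef_)                    # typically a larger weight on the sky feature
```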

It is important to understand the distinction between the AI engine and what it does, and the context of the application; in essence, what it is being used for.

If leaders were not using AI, they would define the goal and create a process journey from start to finish, usually with traceability throughout.

Transparency is one of the fundamental elements of supply chain management. Component traceability in electronics manufacture is vital for quality control and warranty. Every piece of data is tagged in some way to identify it and its source. (Blockchain is emerging in supply chain verification but does not provide the answer in the security intelligence environment due to the risk of false data being introduced which then becomes the “truth”.)

If you can trace the components in a manufactured electronic device back to their source, it is logical that an AI decision should also be traceable back to its original data sources. It should therefore be possible to determine whether the source data, which may be the prima facie evidence in a legal case, has been altered. This may be beyond current technology; however, when a data problem is clearly defined, a solution is usually found.
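One pragmatic building block for that kind of traceability is to fingerprint source data at the point of capture. The sketch below (our illustration, not a description of any existing product) uses a cryptographic hash so that later alteration of the data can be detected:

```python
# A minimal sketch of tagging source data with a cryptographic hash at capture,
# so that any later alteration of the data can be detected.
import hashlib
import json
from datetime import datetime, timezone

def register_source(data: bytes, source_id: str) -> dict:
    """Record the origin and fingerprint of a piece of source data."""
    return {
        "source_id": source_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def verify_source(data: bytes, record: dict) -> bool:
    """Check that the data now presented matches the fingerprint taken at capture."""
    return hashlib.sha256(data).hexdigest() == record["sha256"]

# Example: a frame of video registered at capture time...
frame = b"\x00\x01\x02"  # placeholder for raw bytes from the camera
record = register_source(frame, source_id="camera-17/stream-2")
print(json.dumps(record, indent=2))

# ...and verified later, before any decision based on it is relied upon.
print(verify_source(frame, record))              # True: unaltered
print(verify_source(frame + b"tamper", record))  # False: altered after capture
```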

It could be argued that generic open source AIs which are released to the world for experimentation become prone to bias, because their specific applications are not defined and their training data sources are therefore uncontrolled.

This happened in practice recently, when automated facial recognition technology was deployed for law enforcement use in the USA. It was discovered that the AI had been developed using a narrow demographic and began to show inaccuracy and bias in its recommendations. This can be corrected by widening the data set. Had it not been highlighted through this use case, some of the endemic biases in the organisations concerned might also have gone unchallenged. AI may even help to remove bias.

Why are transparency and explainability so important in the physical security industry?

The industry generates more data than it currently utilises. Even so, AI is focused on generating data about the data. This is because the unstructured data produced by video surveillance cameras is impossible to analyse alongside other relevant data sources without some change of format from the original. Integrations which store alarm data in the image header aid the text search for video associated with an alarm, but they do not tell the analyst what appeared in the video. Every image still has to be reviewed by a human, which is time-consuming and inefficient.

AIs in the form of video analytics (convolutional neural networks) provide effective filter and search functions for video data. The search results can trigger the start point for another analytical AI process, such as picking up a related social media post. Sadly, not only do those who mean harm tweet their intent, but members of the public also seem to believe that tweeting about a serious incident is a good enough substitute for calling the emergency services (the socially responsible thing to do). What is buzzing on the social media networks could be highly relevant to an incident captured on a video surveillance camera.
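As a simplified sketch of that kind of chained analysis (our illustration; the event fields, keyword map and post store are hypothetical), a detection from the video analytics layer can be used to search an already-collected pool of social media posts for related chatter:

```python
# A sketch of chaining analytics: a video detection event triggers a keyword
# search over an already-collected pool of social media posts, so that chatter
# relevant to the incident surfaces alongside the footage.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str           # e.g. "crowd", "abandoned_bag"
    clip_reference: str  # where the associated video clip is stored

# Hypothetical pool of posts gathered by a separate (lawful) collection process.
posts = [
    {"id": 101, "text": "Huge crowd gathering outside the station right now"},
    {"id": 102, "text": "Lovely weather for the match today"},
    {"id": 103, "text": "Someone left a bag by the station entrance??"},
]

KEYWORDS = {
    "crowd": ["crowd", "gathering"],
    "abandoned_bag": ["bag", "unattended", "left"],
}

def related_posts(detection: Detection) -> list[dict]:
    """Return posts whose text mentions any keyword mapped to the detection label."""
    terms = KEYWORDS.get(detection.label, [])
    return [p for p in posts if any(t in p["text"].lower() for t in terms)]

event = Detection(camera_id="cam-04", label="crowd", clip_reference="clips/2020-07-14/0415.mp4")
for post in related_posts(event):
    print(post["id"], post["text"])
# The analyst still reviews the results; the chain only narrows the search.
```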

The source data and the resulting decision can become far removed and much harder to trace back to the original. The digital evidence bag may contain multiple data formats gathered from disparate sources and may be full of untraceable data, potentially reducing its value in the criminal justice system.

Verifying the validity of digital forensic evidence is becoming an ever-increasing problem, even more so now than it was in the 2000s when digital recording took off as a means of storing video.

Without challenging the roots of the AIs embedded in security systems, we may inadvertently propagate flaws right through to the final decisions. There is a school of thought in leadership which suggests that any decision is better than no decision. But when the decision concerns life safety, security and respect for the privacy and dignity of the individual, and is based on a biased process, it is worse than no decision because of the potential harm it may cause.

Decisions based on bias still occur every day in society, but with the increased level of awareness, now is a good time to scrutinise the AIs used in security decision making and ensure that there is continuing education and a drive to make things better.

To challenge AI, leaders have to gain a deeper understanding of how these technologies work. This is not a matter for the data science department in isolation; it is a Board-level consideration.

Robust measures need to be put in place to enable traceability back to source to verify whether the data has been altered along the way. It is only a matter of time before a deep fake video or audio file carries weight in a prosecution. Let’s ensure that AI does not magnify injustice and instead propagates fairness.

We need AI which is free from biased training data. We need AI whose decisions can be replicated, and whose type and use are understood and transparent. Where the source data and the decision become dissociated in some way, let’s be mindful of this and apply AI in a way which allows the source data to be identified and verified.

Most of all, AI should be applied to achieve the maximum benefit, ethically, within the legal, regulatory and standards frameworks and without causing harm. This is why we need to challenge the transparency of AI.

Would you like to understand more about AI and how it can benefit society, industry and solve your business problem?


To find out more about Anekanta Consulting’s work with users of AI based technologies visit our Services section.

Contact us at ask@anekanta.co.uk or fill out our contact form here.

