Better decisions with AI in the Physical Security Domain

This article was first published in Fire and Security Matters Magazine July 2020:

The term Artificial Intelligence (AI) in the physical security domain is controversial and views are polarised.

Some physical security hardware manufacturers dismiss AI as a technology for the back office, or deny its potential altogether. In contrast, cutting-edge developers create better, faster ways of helping humans solve security risk problems using the latest technology innovations.

Denying this branch of computer science its rightful place in the security technology glossary can confuse the customer and lead to a poor technology fit.

AI analyses the data generated from security systems and operations. Whether AI leads to better decisions depends on the type of AI used, the quality and quantity of the source data and the human process defined to manage the outputs.

AI originated with the early computer science pioneers in the UK over half a century ago, who felt it would only be a matter of time before computers could masquerade as humans. Alan Turing's famous Turing Test set the benchmark: for a computer to pass, it would have to be impossible to determine whether a human or a computer had answered a series of questions. However, the Turing Test challenged only one type of human intelligence.

Intelligence is far more complex.

The type of AI which attempts to emulate human intelligence is known as artificial general intelligence, also referred to as the singularity: AI which thinks for itself. This is the idea that the computer or robot will possess a facsimile of human intelligence in all its forms, from IQ to emotional intelligence, to Gardner's theory of multiple intelligences, right through to human consciousness.

For now, let's take off the table the idea that robots will take over the world just yet, or that "HAL 9000", the supercomputer in 2001: A Space Odyssey, will start making decisions about the fate of its human masters. The reality is that we don't understand human intelligence, and consciousness even less so.

The academic and scientific community is divided on this subject. Some say never; others suggest it will be another 50 years before we come even close.

We need to get our feet back on the ground and take a look at the real world problems being solved by this controversial set of technologies.

In the physical security domain, artificial narrow intelligence prevails: data-driven AI designed to solve tightly defined problems. All AI technologies used in the security industry have to be carefully set up and trained. Set-up and training time is reducing significantly as the AIs become more sophisticated, thanks to enhanced development and exposure to more, and more relevant, training data.

AI has been evolving over the last couple of decades, and forms of supervised learning embodied in simple conditional alarm configurations have already been in use for some time. The industry is inclined to forget this, considering it to be "old tech". However, the problems they solve are not old; they are growing and becoming more sophisticated by the day. AI has evolved, and the boards, specifiers, manufacturers, integrators and users who make decisions about the ethical implementation of this technology need to catch up.

Who would have thought that the physical security domain would be at the frontier of pioneering technology? It has historically attracted a low-tech image, having been associated with the physical work of the human intervention stage of security and safety management. Yet before that intervention happens, a raft of technologies is in use to enable human judgements about what is happening, so that action can be taken with precision.

The physical security industry provides an excellent example of people and technology working together to solve problems and achieve the best outcomes.

The development of object detection and automatic and live facial recognition (AFR/LFR) for security purposes has recently been cited as commanding over $7 billion, nearly 10% of the global AI investment budget. This is a hot space which has grabbed the attention of innovators and investors.

Security and video surveillance technologies have matured, and networks, cloud processing and storage have increased. What were previously siloed operations due to discrete data sources, are now emerging as securely accessible and capable of being converged. Now a bigger picture can be built from multiple disparate data sources and in doing so, provide a multitude of perspectives.

Specialised autonomous hardware devices are generating masses of data every day: building technology sensors, video surveillance cameras, alarm sensors, PIR sensors, physical contacts and access control events, lone worker alarms, people counters, heat mapping devices, GIS and GPS data, health check and diagnostics data; the list continues.

The data generated from the management of people, buildings and security systems is almost unlimited.

This vast ocean of data tells a story about what is happening at a point in time. But the means by which the human operator can correlate and interpret the data in context is limited by the AI tools the developers of the systems provide. Currently, these are fairly primitive.

The most sophisticated form of AI in use is the deep learning neural network applied to video data sources. In the security industry, convolutional neural networks for image processing are used extensively in Automated Facial Recognition/Live Facial Recognition (AFR/LFR) and object detection. AFR/LFR algorithms solve the very specific problem of authenticating authorised persons, or identifying unauthorised persons. Object detection determines the presence of people and objects with defined characteristics such as car type, adult or child, and clothing type and colour. But these technologies analyse the video sources in isolation.
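To make the idea of a convolutional network less abstract, its core building block can be sketched as a simple 2D convolution: a small filter slides over an image and produces a feature map highlighting patterns such as edges. The tiny "image" and filter values below are illustrative assumptions, not taken from any real AFR/LFR product.

```python
# Minimal sketch of the 2D convolution at the heart of a CNN layer.
# Real networks stack many learned filters; here one hand-made filter
# is slid over a toy 4x4 "image" to show the mechanics.

def convolve2d(image, kernel):
    """Slide `kernel` over `image` (valid mode) and return the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

# A toy image whose right half is bright, and a vertical-edge filter.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [
    [-1, 1],
    [-1, 1],
]
feature_map = convolve2d(image, edge_filter)
# The strongest responses appear where dark meets bright, i.e. the edge.
```

In a trained network, thousands of such filters are learned from labelled data rather than written by hand, and their stacked outputs are what allow the system to recognise faces or object categories.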

A combination of AI technologies could correlate the outputs of a number of siloed AIs and paint a bigger picture. This is happening gradually but in most cases it is the human who makes the choice to link the relevance of events. These choices can be displayed in the same interface which is a big step forward towards putting AI to best use.

The quality and format of the data sources are the Achilles' heel of achieving highly reliable outputs.

Another subject attracting much controversy in recent times is the prevalence of biased data, due to the developers of global AIs operating within an echo chamber.

An unrepresentative training data set can lead to a biased algorithm, rendering the results untrustworthy when applied to a wider real-world scenario. However, high-end providers have invested heavily in addressing these issues, and the better algorithms are cited in National Institute of Standards and Technology (NIST) reports as being over 99% accurate across a varied race and gender demographic. This is encouraging.
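The check behind such claims can be sketched simply: measure accuracy per demographic group rather than as a single headline number, so a gap between groups becomes visible. The data below is invented purely to show the calculation; it is not a real benchmark.

```python
# Illustrative per-group accuracy check for a classifier's results.
# Each result is a (group, correct) pair; the groups and counts are made up.

def per_group_accuracy(results):
    """Return {group: fraction of correct results} for each group seen."""
    totals, hits = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if correct else 0)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical results: group_a scores 99/100, group_b scores 90/100.
results = ([("group_a", True)] * 99 + [("group_a", False)]
           + [("group_b", True)] * 90 + [("group_b", False)] * 10)

accuracy = per_group_accuracy(results)
# A large gap between groups signals an unrepresentative training set.
```

A single aggregate accuracy of 94.5% would hide the disparity shown here, which is why per-demographic reporting matters.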

In a risk situation, there is no training data if the AI is looking across multiple data sources and correlating what may appear to be unconnected events. This is why many AIs need a bedding in period to learn the environment. This is normal. Only the bleeding edge AIs can start producing reliable results with minimal labelled data, but their descriptive language is at times beyond comprehension making it very difficult for the buyer to understand what the AI actually does.

Such an ensemble of AIs could add more value than the individual data source queries alone. This is probably the most challenging sector for AI, but when successfully implemented it brings the most rewarding results.

The best use case scenario may be the prevention of a terrorist incident through the correlation of relevant data across multiple data sources. For example: a blue car parks in an unusual place on Bond Street at 12pm, its type and licence plate are recognised by the system and an event is created; aggregated social media feed analysis picks up boastful and threatening hate language and an event is created; a known terrorist wearing a red jumper walks down Bond Street at the same time and an event is created; and an organised protest is underway, resulting in a congregation of hundreds of people at the same time in an area requiring increased surveillance. Threats could come from anywhere. Through the combined use of a number of types of AI, the likely location and time of an attack could be identified, security officers deployed in time to disperse the crowd and, with Police coordination, the suspect detained for questioning and a potential attack averted.
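The kind of correlation this scenario relies on can be sketched as a simple rule: sum the evidence from events raised near the same place and time by the siloed AIs, and alert when the combined score crosses a threshold. The event names, weights and threshold below are illustrative assumptions, not a real product's logic.

```python
# Hypothetical sketch of correlating siloed AI outputs by time and place.
from dataclasses import dataclass

@dataclass
class Event:
    source: str     # which siloed AI raised the event
    location: str   # where it was observed
    minute: int     # minutes since midnight
    weight: int     # how strongly this source suggests a threat

def correlate(events, location, minute, window=30, threshold=5):
    """Sum weights of events near the same place and time; alert if high."""
    score = sum(
        e.weight for e in events
        if e.location == location and abs(e.minute - minute) <= window
    )
    return score >= threshold, score

# The four events from the scenario, with made-up weights.
events = [
    Event("anpr", "Bond Street", 720, 2),          # unusual parked car, 12pm
    Event("social_media", "Bond Street", 715, 1),  # threatening language
    Event("afr", "Bond Street", 722, 3),           # known suspect recognised
    Event("crowd_count", "Bond Street", 725, 1),   # large congregation
]

alert, score = correlate(events, "Bond Street", 720)
```

No single event here is decisive on its own; it is only the aggregation across sources that pushes the score over the threshold, which mirrors how the human operator would otherwise have to join the dots manually.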

Typical physical security and counter terrorism challenge

Without AI, correlating these unconnected events would be a slow manual task.

With AI, the decision makers can take action faster and with more precision.

In conclusion, AI in the physical security setting is not a mystery. It is not designed to replace the human; it is designed to augment the human's ability to make accurate decisions quickly to avert disaster. There is now simply too much data available for a human alone to analyse. It is time to embrace this technology set. In doing so, new opportunities open up as problems which previously couldn't be solved, such as making the COVID world safer, can be considered, contributing to the safe recovery of the economy without a high human cost.

To find out how Anekanta Consulting can help you solve data convergence problems and make better decisions, faster, contact us or fill out our contact form.
