Anekanta Consulting’s perspective on RUSI.org’s latest policy considerations on AI and UK National Security


On 27th April 2020, RUSI.org released its GCHQ-commissioned policy considerations on the benefits and use of AI for National Security in the UK. Here, Anekanta Consulting provides its initial opinion and unique perspective on the findings and conclusions in the report.

On reading the report, one may conclude that AI has great potential and is probably here to stay, when used for good within legal and ethical frameworks to enhance human decision-making. The further development of AI will have to be accelerated due to the increasing sophistication of bad actors using AI for illegal purposes.

Anekanta Consulting draws a parallel with Carl Jung’s philosophies, in which he describes the shadow as the unknown and hidden characteristics of personality, generally associated with the “dark side”. Those who embrace this philosophy are better able to counter dark aspects, rather than projecting them onto those around them and, in doing so, sticking their heads in the sand. The latter strategy runs the risk of the person being caught off guard, unprepared to respond to something which probably exists, although hidden from view.

It is encouraging to see that there is broad acceptance of AI and its derivatives, and that they are considered useful tools in the detection and prevention of many kinds of crime, albeit with many caveats which quite rightly refer to the protection of the freedom and privacy of the individual.

It is also acknowledged that gaining a fast and accurate understanding of an emerging situation using all available data sources is currently very difficult for humans. It is information overload.

AI currently lends itself well to the analysis of siloed data sets to solve defined problems, and in doing so it assists humans in making decisions. However, one problem appears to be left unaddressed by AI: the idea that an emerging threat cannot be readily predicted.

This appears to be the current view because there is insufficient training data available, given the low number of extreme attacks. Deep learning networks need a large amount of training data in order to make meaningful inferences.
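
To illustrate the point with synthetic data (this sketch is ours, not the report's, and the numbers are entirely hypothetical), a classifier trained on a data set containing only a handful of "extreme" events among thousands of normal ones will tend to default to the majority class and miss most of the rare cases.

```python
# A minimal sketch, using synthetic data, of why rare events are hard to learn:
# with only 10 "extreme" events among 10,000 normal ones, a simple classifier
# tends to predict the majority class and misses most of the true positives.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical data: 10,000 normal events and only 10 extreme events.
X_normal = rng.normal(0.0, 1.0, size=(10_000, 5))
X_extreme = rng.normal(1.0, 1.0, size=(10, 5))
X = np.vstack([X_normal, X_extreme])
y = np.concatenate([np.zeros(10_000), np.ones(10)])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Recall on the rare class is likely to be very low: most extreme events are missed.
print("Recall on rare class:", recall_score(y, model.predict(X), pos_label=1))
```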

By nature, random events are exactly that: they cannot be predicted. They sit in the category of the ambiguous threat. Something bad may happen, but what, when and where?

There are technologies emerging from the AI community which draw meaning from multiple disparate data types and sources: meaning which could help security operators prioritise their attention and focus on one area over another.

We feel that there is a strong parallel between the way AIs predict cyber threats and the challenge of predicting physical security threats.

Cyber terrorists targeting mass infrastructure seek to build an attack surface by recruiting insecure devices, often unmonitored autonomous IoT products, into the fold and automatically planting code which is later activated to perform a disruptive activity such as a distributed denial-of-service attack. This behaviour is not dissimilar to the planning of terror attacks and other seemingly random acts of disruption to society.
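
As an illustration only (the device names, traffic figures and threshold below are hypothetical and not drawn from the report), even a very simple baselining heuristic can flag an unmonitored IoT device whose outbound traffic suddenly departs from its own history, as might happen when a planted payload is activated.

```python
# A minimal, hypothetical sketch: flag devices whose latest outbound connection
# count deviates sharply from their own recent baseline.
from statistics import mean, stdev

# Rolling counts of outbound connections per device, per 5-minute window (made-up data).
history = {
    "camera-01": [12, 15, 11, 14, 13, 12, 480],   # sudden spike in the last window
    "thermostat-07": [3, 2, 4, 3, 2, 3, 3],       # behaving normally
}

def flag_anomalies(history, z_threshold=3.0):
    """Return devices whose latest count sits far above their own baseline."""
    flagged = []
    for device, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append(device)
    return flagged

print(flag_anomalies(history))  # ['camera-01']
```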

But these bad actors seldom work alone. Clues leading to the master plan sit in publicly available data sets, blogs and forums which contain chatter and speculation. Clues sit in unusual activities captured by video surveillance cameras around commercial and government premises, and in other data sets. Everyone leaves a digital trail; groups leave a larger footprint. It is not suggested that individual suspect tracking using AI is either legal, ethical or even possible. However, if, as a result of using AI, humans are better able to make the decisions needed to place increased surveillance around a risk area at a given point in time, this may serve to substantially reduce the vulnerability of the target.

AIs are emerging which may be described as “Super AIs”: systems able to draw meaning from the outputs of siloed AIs focused on specific tasks such as automatic number plate recognition and other forms of image analysis, video surveillance data, aggregated social media sources, geographic information systems, satellite data, and more. These AIs sit in discrete areas of the commercial world, and they are on the bench now, waiting to be further developed for the cause.
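
To give a flavour of what such a fusion layer might look like, here is a minimal sketch under our own assumptions (the source names, weights and scores are illustrative and are not taken from the report or any specific product): the outputs of several siloed, task-specific AIs are combined into a single priority score per area, which a human operator can then use to decide where to direct attention.

```python
# A minimal, hypothetical sketch of a fusion layer over siloed, task-specific AIs.
from dataclasses import dataclass

@dataclass
class SiloedSignal:
    source: str       # e.g. "anpr", "video", "social_media", "satellite"
    area: str         # geographic area or site identifier
    score: float      # normalised 0..1 output from the specialised model
    weight: float     # how much the fusion layer trusts this source

def fuse(signals):
    """Combine per-source scores into a weighted priority score per area."""
    totals, weights = {}, {}
    for s in signals:
        totals[s.area] = totals.get(s.area, 0.0) + s.score * s.weight
        weights[s.area] = weights.get(s.area, 0.0) + s.weight
    return {area: totals[area] / weights[area] for area in totals}

signals = [
    SiloedSignal("anpr", "area-north", 0.7, 0.5),
    SiloedSignal("video", "area-north", 0.8, 1.0),
    SiloedSignal("social_media", "area-south", 0.3, 0.8),
]

# Higher scores suggest where human attention might be prioritised.
print(sorted(fuse(signals).items(), key=lambda kv: kv[1], reverse=True))
```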

For further information and to discuss the AIs currently developing in Anekanta Consulting’s technology incubator, please contact ask@anekanta.co.uk.

RUSI.org has published the report on its website, and it is available to download: https://rusi.org/publication/occasional-papers/artificial-intelligence-and-uk-national-security-policy-considerations

We'd love to hear your views…
