Video Authenticity in an AI-Centric World


This article was first published in May 2020 in Security Matters and Fire Safety Matters magazines and on their websites.

Writing exclusively for Security Matters, Anekanta Consulting’s Founder Pauline Norstrom explains why it has never been more important for stakeholders to remain properly informed about this ever-evolving set of technologies.

Barely a day goes by without AI and video surveillance somewhere in the public spotlight due to concerns about ethics and privacy.

Even more so now, given the rapid development of highly beneficial COVID-focused applications designed for safety purposes. Policymakers have to run to keep up with the pace, yet it has never been more important for stakeholders to stay properly informed about this ever-evolving set of technologies.

The search for solutions to the new issues created by the COVID crisis has been catalysed by AI, which helps analysts who might otherwise be overwhelmed with data to understand, in real time, the impact of constant policy change, differing international developments and the effects on people, business and the economy.

AI is being used effectively to predict future scenarios and improve preparedness

The latest developments in contact tracing use AI to assess the risk of infection to an exposed individual, while other AIs automatically track and report on NHS resource availability in response to new cases. In bio-science, over 100,000 academic papers were made available through the CORD-19 project to be automatically analysed using natural language processing and other AIs, in order to find similarities with other viruses and to model outcomes and learnings from existing drugs.

AI is already used effectively in the Healthcare and Government sectors, and it is rapidly growing in popularity in the Physical Security Industry, where it creates meaning from the data generated by the proliferation of cameras and other sensors.

AI has the potential to solve all sorts of problems, but every leading light has a dark side

Bad actors may try to use it to alter the perception of reality. Although we have not yet seen fake videos successfully passed off as valid evidence in court, the need to verify a surveillance video’s authenticity and the need for AI explainability are inexorably linked.

The video surveillance domain is a massive generator of content-rich data across multiple sectors including retail, logistics, education, sport and infrastructure. Without AI, it is nearly impossible to search video retrospectively to determine what happened after a complex event, or to predict whether a situation may arise. When used with cloud technology bolstered by strong cyber-security measures, ever-improving communications and dedicated processing, AI is accessible to a much wider user base. Furthermore, processing does not have to happen at the edge or nearby: it can happen anywhere. AI no longer sits in the corner of the development lab; it is being applied in the real world to solve big data information overload.

When technologies are distributed, it is important not to lose sight of traceability back to the source

The broadcast and media industries embedded digital rights management technology to prevent copyright theft. By contrast, the video surveillance industry has a steep hill to climb in applying authentication technologies in real time, because they increase the processing and power overhead and slow systems down. This is why, if checksums are applied at all to verify authenticity, it generally happens at the point of export.
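By way of illustration, here is a minimal sketch of the principle in Python, assuming SHA-256 as the hash function; the file name and function names are hypothetical, and no particular standard or product is implied:

```python
import hashlib

def checksum_at_export(video_path: str) -> str:
    """Compute a SHA-256 checksum of an exported video file,
    reading in chunks so large exports do not exhaust memory."""
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_export(video_path: str, recorded_checksum: str) -> bool:
    """Re-compute the checksum and compare it against the value
    recorded at export; any alteration to the file changes the
    digest and the comparison fails."""
    return checksum_at_export(video_path) == recorded_checksum

# Hypothetical usage with an exported clip:
# recorded = checksum_at_export("incident_cam02.mp4")
# ...later, before the clip is relied upon as evidence...
# assert verify_export("incident_cam02.mp4", recorded)
```

Computing the digest only at export is cheap, which is precisely why it is favoured over continuous, real-time authentication; the trade-off is that anything that happens to the video before export is not covered.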

Nothing can halt progress as these technologies come of age. This is evidenced by massive investment in AI for facial recognition and object detection in the security industry which, according to the 2019 Stanford AI Index, topped $7 billion in 2019: nearly 10% of total global investment in AI. AI is proving itself in live monitoring and in the retrospective search of video recorded within discrete systems. However, to create video storage warehouses which span geographies and industries, and which are suitable for mass automatic analysis, the video has to be accessed through secure APIs and harmonised to a common format. This may mean that the original data structure is lost.

Analysis through a neural network produces new data about the video which can be correlated with other data sets to build the power to predict events. The AI brings great benefits by increasing accuracy and reducing human processing time but, in creating meaningful analysis, the outputs may become obfuscated from the original data source, potentially resulting in a challenge later if authenticity, explainability, algorithmic bias and ethical use are called into question.
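One way to mitigate that loss of traceability, sketched below with assumed, hypothetical field names, is to store provenance metadata alongside every derived analysis output: the original file’s hash, the transcode parameters and the model version that produced the result.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(source_bytes: bytes, transcode_params: dict,
                      model_version: str, analysis_output: dict) -> str:
    """Bundle a derived analysis result with enough metadata to
    trace it back to the original, pre-harmonisation video."""
    record = {
        "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
        "transcode_params": transcode_params,   # e.g. codec, resolution
        "model_version": model_version,         # which AI produced this
        "analysis_output": analysis_output,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Hypothetical example: an object detection result tied to its source
print(provenance_record(
    source_bytes=b"raw native-format video bytes",
    transcode_params={"codec": "h264", "resolution": "1080p"},
    model_version="detector-v1.3",
    analysis_output={"event": "person_detected", "confidence": 0.92},
))
```

Under this scheme, even after transcoding and analysis, each output still points back to a verifiable source, which is exactly what is needed if authenticity or explainability is later called into question.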

AI is going through a renaissance as powerful dedicated GPUs and advanced open source and crowd-sourced coding frameworks become widely accessible to developers.

The dialogue around trust is maturing

Forward-thinking businesses, driven by market forces, see the benefit of starting to scope, develop and introduce AI into their products and business processes. Demand for the use of AI in the security and safety realm is growing rapidly as governments look to the private sector to help them devise policies and solutions which enable the permanent, safe return of people to public spaces.

It is therefore expected that the use of professional video surveillance cameras to monitor publicly accessible spaces will continue to grow, further supported by thermal cameras, automatic facial recognition, occupancy counters and social distancing detection, all of which continually monitor the physical workspace and create alerts if breaches of safety policies occur.

When AI is applied for purposes other than security, it opens up a much wider application in the safety of crowded spaces. This new use case assumes that people are a risk to each other simply by going about their normal daily duties, which could be perceived as a wider and more immediate threat than the lone wolf terrorist. Not that the threat of terrorism is any lower than it was, but there is a new terrorist on the street: the hidden bio-threat.

The increase in the application of AIs to analyse video and other associated data may create a pressing need to re-examine whether video evidence is authentic. Such evidence is most likely to be used to manage health and safety and, ultimately, insurance claims, and will attract wider public scrutiny if employees become infected and seriously ill after attending a supposedly protected space.

Authenticity in the video surveillance domain hit a wall in the early 2000s when recording transitioned from tape-based media to computer hard drives.

This was a hot topic, sparking much debate around the admissibility of digital evidence in court

Because digital recordings could be easily altered without detection, the industry needed a means by which video authenticity could be verified, especially when the camera was the only witness. In response, the British Security Industry Association, working hand in hand with the Criminal Justice System and the UK Government Home Office, created a code of practice which later became a British Standard (BS 8495). The standard specified proof of authenticity through the application of checksums to exported data, and encouraged robust audit trails and physical security of devices up to the point at which video evidence entered police custody.
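Standards of this kind describe the principle rather than an implementation, but a simple hash chain, as in the hypothetical sketch below, shows how an audit trail can be made tamper-evident: each entry commits to the hash of the previous one, so altering any historic entry breaks every hash that follows.

```python
import hashlib
import json

def append_entry(trail: list, action: str, actor: str) -> None:
    """Append a tamper-evident audit entry: each entry commits to
    the hash of the previous one, so editing history breaks the chain."""
    prev_hash = trail[-1]["entry_hash"] if trail else "0" * 64
    body = {"action": action, "actor": actor, "prev_hash": prev_hash}
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append(body)

def verify_trail(trail: list) -> bool:
    """Re-compute every hash in order; any altered entry is detected."""
    prev_hash = "0" * 64
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        prev_hash = expected
    return True

# Hypothetical chain of custody for an exported clip:
trail = []
append_entry(trail, "export incident_cam02.mp4", "operator_01")
append_entry(trail, "handover to police custody", "operator_01")
assert verify_trail(trail)
```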

When systems became networked, users could choose whether to capture and record in-camera, on-site or remotely, and images required for evidence could be downloaded anywhere an authorised user had access. Now, images may be captured and temporarily stored on-site, then trans-coded and stored in a different format from the original. AI engines create near real-time analysis, substantially enhancing the value of the system to stakeholders and, in many cases, saving lives as a result. The benefits outweigh the risks.

The perceived issues surrounding video authentication may seem moot today. But as credible, reliable AIs continue to emerge, fuelled by the rapidly changing need to solve the problems of the pandemic, this matter will be back in the spotlight: as technology becomes more sophisticated, so do those who intend harm or mischief to others. The openness of the internet and ever-expanding communications networks enable fast, efficient access to servers around the world.

Never has it been more important to ensure that systems are managed effectively to the highest possible standards

Professional video surveillance in the UK is tightly controlled by the Professional Security Industry, held together by several strict voluntary standards and certifications and a Government code of practice which focuses on citizen privacy rather than on keeping systems secure and useful for forensic purposes. Video evidence can be compelling and can seal a conviction in a criminal case if proven to be authentic. Furthermore, a guilty plea in a major case can save the Criminal Justice System millions in reduced legal costs, cementing the value of video surveillance to the public when properly managed.

However, the deep fakes created during the 2019 UK election campaign, the most infamous example being the Future Advocacy video showing Boris Johnson and Jeremy Corbyn endorsing each other, demonstrated how easily video can be manipulated.

How can we trust what we are seeing?

As native video surveillance data is harmonised into standard formats and then used as evidence, it becomes much easier to create a deep fake without detection. We may see a resurgence of the legal debate around whether a digital image has been, or could have been, altered.

Modern surveillance providers offer cloud and managed services which open up the ability to utilise AI. These systems are strictly controlled. However, the standards and audit trail frameworks need to catch up to ensure that bad actors are not successful in altering vital video evidence or creating a mass illusion to distract security analysts from a real incident.

Video authentication in an AI-centric world needs to make it back onto the strategic agenda. In the meantime, it is the Professional Security Industry, once again, which successfully self-regulates: creating robust audit trails and explainable AI, supported by bank-grade cyber-security measures, to bring the best of these technologies to market for the greater good of society.

To find out how Anekanta Consulting can help you with policy making, horizon scanning and strategy, contact us at ask@anekanta.co.uk or fill out our contact form here.
