This is an interesting story for the security community. McAfee researchers showed that a generative adversarial network (GAN), a form of AI, could automatically alter the characteristics of a face beyond recognition and fool an automated facial recognition (AFR) system into identifying someone else. It would take a very determined and technically sophisticated criminal to intercept and alter the images captured in an airport before the AFR performs its task; however, just because it is difficult does not mean it is impossible. There are lessons here, as new ways of faking evidence emerge every day. Is it time for the Board to review cybersecurity, data management, transparency, ethics and human oversight…?
The full story can be found here: https://www.technologyreview.com/2020/08/05/1006008/ai-face-recognition-hack-misidentifies-person/
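For readers curious about the mechanics: the core idea behind such attacks is that small, carefully directed changes to an image can push it across a recognition system's decision boundary. The sketch below is not McAfee's GAN pipeline; it is a much simpler, entirely hypothetical linear "matcher" attacked with an FGSM-style perturbation, included only to illustrate the principle.

```python
import numpy as np

# Illustrative only: a toy linear "face matcher" attacked with an FGSM-style
# perturbation. This is NOT McAfee's GAN method; the matcher and all names
# here are hypothetical, chosen to show the principle in a few lines.

rng = np.random.default_rng(0)

template_a = rng.normal(size=64)  # enrolled identity A (the true person)
template_b = rng.normal(size=64)  # enrolled identity B (the misidentification target)

def match(image):
    """Return whichever enrolled identity scores higher for this image vector."""
    return "A" if image @ template_a > image @ template_b else "B"

image = template_a + 0.1 * rng.normal(size=64)  # a genuine "photo" of person A
print(match(image))  # correctly matched as "A"

# For a linear matcher, the gradient of (score_B - score_A) with respect to
# the image is simply (template_b - template_a). FGSM steps in the sign of
# that gradient; here we pick the smallest step size that flips the decision.
grad = template_b - template_a
margin = image @ template_a - image @ template_b  # how strongly A currently wins
epsilon = 1.01 * margin / np.abs(grad).sum()      # just enough to cross the boundary
adversarial = image + epsilon * np.sign(grad)

print(match(adversarial))  # now misidentified as "B"
```

Real AFR systems are vastly harder targets than this toy matcher, but the same principle, optimizing image changes against the model's decision boundary, underlies the McAfee demonstration.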