If you are not certain whether you are implementing AI-based technologies safely and ethically, then why are you doing it?
This is one of the questions we seek to help you answer through our work with the global organisations that are developing standards for high-risk AI applications.
One of these standards will provide guidelines for the ethical use of facial recognition software in video surveillance systems.
Built on a ground-breaking guide developed by BSIA members, this new standard, now in development at the British Standards Institution (BSI), will continue the BSIA's good work. The standard will provide an auditable ethical operational framework to guide the use of this sometimes controversial technology.
We want a clear step-by-step process that can result in a “NO GO” decision in high-risk applications if all ethical parameters are not met, and that shows a clear path to safe and ethical use in low-risk applications.
Through our work with the standards committee, we seek to end the confusion around ethical use, and in doing so play a key role, alongside industry peers, in what will be one of the first ethical-use standards of its kind in the world.
Our founder, also a member of the BSI committee, was invited by the BSIA (chair of the BSI committee developing the standard) to present the status of the guide at the May 22 BSI Spring Standards Conference.