(Reuters) — Britain’s most senior police officer on Monday called on the government to create a legal framework for police use of new technologies, such as artificial intelligence.
Speaking about live facial recognition, which police in London started using in January, London police chief Cressida Dick said she welcomed the government’s 2019 pledge to create a legal framework for the police use of new technology involving AI, biometrics, DNA, and other elements.
“The best way to ensure that the police use new and emerging tech in a way that has the country’s support is for the government to bring in an enabling legislative framework that is debated through Parliament, consulted on in public, and which will outline the boundaries for how the police should or should not use tech,” Dick said.
“Give us the law and we’ll work within it,” she added.
Dick rejected evidence that facial recognition algorithms are racially discriminatory, i.e., that their accuracy rates vary with the skin color of the person being detected.
“We know there are some cheap algorithms that do have ethnic bias but, as I’ve said, ours doesn’t, and currently the only bias in it is that it shows it is slightly harder to identify a wanted woman than a wanted man,” she said.
The London police’s facial recognition technology is provided by NEC, a Japanese company.
(Reporting by Elizabeth Howcroft, editing by Guy Faulconbridge.)