Police forces across the UK are increasingly using artificial intelligence (AI) to help keep their communities safe and secure. AI offers the potential to reduce resourcing pressures, improve public safety and provide more consistent results.
However, the use of AI also comes with a number of challenges. Police forces must be clear about their reasons for using AI and there is an onus on them to use AI openly and accountably. There are also concerns around how AI systems collect and manage data, which could pose a threat to the public’s rights or discriminate against certain groups.
To stay on top of it all, police forces will need to learn from each other and keep an open dialogue between teams, while at the same time taking note of other sectors that are further along the road in their AI journey. That way, they’ll be able to get the best results out of this new tech, while making sure it isn’t compromising any ethical standards.
AI trials are on the rise
A number of police forces across the UK have trialled “predictive policing” tools, which use algorithms and historic data to predict where certain types of crime are likely to happen.
A small number of police forces have even started using tools which allow them to predict whether a known individual is likely to exhibit certain behaviours or characteristics in the future.
There have also been trials of facial recognition technology to identify people automatically from live video footage (such as CCTV) – the Metropolitan Police Service and South Wales Police have trialled this technology in a number of areas, including at large events (1).
However, a ruling by the Court of Appeal back in August 2020 reminds us that this technology isn’t foolproof. It ruled that the use of live facial recognition by South Wales Police was unlawful, saying that the police service had not gone far enough to check that the technology didn’t exhibit gender or racial biases (1).
The pitfalls of biases, privacy and discrimination
This ruling echoed concerns that facial recognition algorithms may disproportionately misidentify or fail to identify certain groups. Some evidence shows differences in the accuracy of certain facial recognition algorithms depending on the subject’s ethnicity or gender.
Campaign groups and academics have called for greater ethical scrutiny of the use of facial recognition, as they’re concerned about its accuracy and infringement on rights to privacy and consent.
In addition, there’s a risk that police algorithms may direct officers to patrol areas that are already disproportionately over-policed.
Ethical questions around AI use
It is because of these issues that police forces must be clear about their intent in using AI or new technology. Being clear, at the very start of an AI project, about the end goal for using that technology is essential.
Police forces must also consider how governance and oversight will work when using AI for predictive policing. They must be clear about who is accountable and weigh up whether introducing an ethics panel is necessary.
The learning process
It’s vital that police forces learn both from each other and from other industries. A sector really making headway on the ethical challenges around AI technology is health, where organisations are now using AI in processes like cancer screening. The financial services sector could offer guidance too, as a number of businesses there are using AI and process automation to transform certain services.
The potential that AI and new technology offer modern policing is huge. But even though the sky’s the limit, the lawfulness of AI needs to be front and centre of the conversation before police forces take any of these technologies further.
The focus needs to be on harnessing everything these technologies can achieve, but in a fair, responsible way that helps keep communities safe and secure.
Published date: 25th May 2022
This article and related document links do not purport to be comprehensive or to give legal advice. While every effort has been made to ensure accuracy, Risk Management Partners cannot be held liable for any errors, omissions or inaccuracies contained within the article and related document links.
Readers should not act upon (or refrain from acting upon) information in this article and related document links without first taking further specialist or professional advice.
Risk Management Partners Limited is authorised and regulated by the Financial Conduct Authority. Registered office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company no. 2989025
Sources