A new code of practice is necessary to manage the use of machine learning in policing, according to a report by the Centre for Information Rights (CIR) at the University of Winchester.
The report follows a consultation workshop organised by the CIR in collaboration with the Royal United Services Institute and supported by the Higher Education Innovation Fund.
The event brought together participants from law enforcement, government, the legal sector, and academia, as well as policy-makers, to debate issues relating to the design and deployment of machine learning in operational law enforcement environments. Topics of discussion included notions of fairness, the elimination of bias, accountability, and output accuracy.
Petros Terzis, report co-author and doctoral student at the University of Winchester, said: "The use of machine learning algorithms can potentially assist the police. However, the use of such a tool needs to be assessed and reviewed to address its influence on police decision-making. We need to make sure that we identify the problem and then look for a solution through technology. Not the other way around."
Christine Rinik, co-author and senior lecturer in law at the University, explained that the use of the technology for predictive policing can raise further issues:
"We have to consider any impact on the fairness of a decision which may be based in part on the use of machine learning algorithms. Transparency, explainability and accountability must be examined."
The report concluded that a new code of practice is needed to govern the use of algorithmic decision-making systems in the policing context, and recommended the creation of an appropriate independent authority to oversee this use.
The full report is available to download at www.winchester.ac.uk/algorithmsinpolicingreport
Press Office | +44 (0) 1962 827678 | press@winchester.ac.uk | www.twitter.com/_UoWNews