16.08.20
Police forces around the world keep trying to use machine learning to 'predict' crime. It rarely ends well. Here is another example, reported by Wired:
A flagship artificial intelligence system designed to predict gun and knife violence before it happens had serious flaws that made it unusable, police have admitted....
Last month, in the wake of the global Black Lives Matter protests, more than 1,400 mathematicians signed an open letter saying the field should stop working on the development of predictive policing algorithms. ...
“In the worst case scenario, inaccurate models could result in coercive or other sanctions against people for which there was no reasonable basis to have predicted their criminality – this risked harming young people’s/anyone’s lives despite the clear warnings – however, it is good to see the team having evaluated its own work and identifying flaws from which to start again,”
On closer inspection, it seems that West Midlands Police have actually handled this fairly well. They carried out an exploratory trial and were very wary of potential bias in the software, and it appears that if the system does not pass those tests, the project will not continue. The project has been scrutinised by an independent ethics committee, which has published its findings, and the model has been assessed using the ALGO-CARE framework, which assesses algorithms for bias. While the field of predictive policing as a whole may be fairly questionable, and there may be a good argument for abandoning it altogether, the way this project was conducted looks like a reasonable one.