10.03.20
A man is wrongly accused of a crime by a facial recognition algorithm. The man is Black, but the algorithm was largely trained on white faces. Where did things go wrong?
This is a good piece of journalism from the New York Times.
As some of the big tech firms stop making facial recognition available to police forces, this is one of the first documented cases of a wrongful arrest made purely on the basis of this technology. These kinds of AI applications are going to become more and more prevalent. Surveillance is rising, both by corporations that want to know our shopping habits and by states that wish to control us. The implications of using imperfect algorithms to extend the powers of the state are going to be with us for a while. It is something practitioners of AI need to think hard about as we build our tools.