In an open letter last June, a group of American mathematicians urged their colleagues not to cooperate with the police. Among other things, they criticized predictive policing and facial recognition, since the data used to train AI systems is often biased. This can have discriminatory repercussions for minority groups, such as the Black community. The letter’s initiators emphasize that, as mathematicians, they would in effect be lending a ‘scientific’ cachet to these policing practices, when in fact a scientific basis is lacking. “It is simply too easy to create a ‘scientific’ veneer for racism,” they wrote. The letter reads as a plea for ethics in AI and makes clear that AI technologies, although rooted in mathematical models, are by no means ‘neutral’.

About the author

Katleen Gabriels is a moral philosopher specializing in computer ethics at Maastricht University. She conducts research into the relationships between morality and computer technologies.