In recent weeks there has been a lot of attention in the news about the use of algorithms by the government to detect fraud and crime. Congratulations! I’d say we have a government that’s getting more efficient and moving with the times. It would be much more disturbing to learn that the government still does not use predictive algorithms.

Yet in the media, this issue was highlighted from an entirely different perspective. The Dutch news agency NOS, for instance, published an article titled (in English): “Government uses algorithms on a large scale, ‘risk of discrimination’”. The article went on to state that the use of predictive algorithms involves a high risk of discrimination, and it drew indignant reactions from readers, which made it clear that the discussion on the use of algorithms is to a large extent driven by emotion. In the process, the fact that the title of the article is not only tendentious, to say the least, but also factually incorrect seems to have been overlooked.

Better and faster than people

The literal meaning of the word discrimination is ‘the act of making a distinction’. And that is exactly what an algorithm does. It classifies data on the basis of relationships between characteristics. And it does that much better and faster than people are able to do. But if you take the literal meaning of the word discrimination as a starting point, the assertion that the use of predictive algorithms entails a high risk of discrimination is nonsensical. You would then have to state: “Algorithms discriminate; that is what they are made for.”

Nevertheless, in a social context, discrimination stands for something quite different: making an illegal distinction (on the basis of gender, religion, belief, sexual orientation, etc.). And that is exactly what an algorithm does not do. An algorithm always produces the same output for the same input. It is amoral and therefore cannot, by definition, make an illegal distinction. Put simply: an algorithm is not affected by a bad night’s sleep or an unpleasant run-in with the downstairs neighbour, whereas people are. Still, the confusion is understandable. An algorithm is created, and learns, on the basis of data. And that is where the difficulty lies. Data is not free of human influence and can be ‘biased’ in many ways. It is therefore quite possible that aspects are hidden in the data that lead to discrimination.
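To make that last point concrete, here is a minimal sketch with entirely made-up data (the postcodes, groups and outcomes are hypothetical). It illustrates how a seemingly neutral feature such as a postcode can act as a proxy for a protected attribute, so that historical patterns in the data can carry a discriminatory distinction even when the protected attribute itself is never given to the algorithm:

```python
# Minimal sketch with made-up data: a seemingly neutral feature (postcode)
# can act as a proxy for a protected attribute, so a learning algorithm
# can pick up a group difference without ever seeing the attribute itself.

# Each record: (postcode, protected_group, historical_outcome)
records = [
    ("1011", "A", 1), ("1011", "A", 1), ("1011", "A", 0), ("1011", "B", 1),
    ("9741", "B", 0), ("9741", "B", 0), ("9741", "B", 1), ("9741", "A", 0),
]

# How strongly does the postcode line up with the protected group?
for code in ("1011", "9741"):
    in_code = [g for pc, g, _ in records if pc == code]
    share_a = in_code.count("A") / len(in_code)
    print(f"postcode {code}: {share_a:.0%} of records belong to group A")

# Historical outcomes per postcode: the pattern an algorithm trained on this
# data would reproduce, even though 'protected_group' is not a feature.
for code in ("1011", "9741"):
    outcomes = [y for pc, _, y in records if pc == code]
    rate = sum(outcomes) / len(outcomes)
    print(f"postcode {code}: positive-outcome rate {rate:.2f}")
```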

Simple to detect

So just as discriminatory aspects may be hidden in the human decision-making process, they may also be hidden in data. The main difference, however, is that discrimination within the human process is very difficult to detect and correct, as we have learned from history. Discrimination within data, on the other hand, turns out to be relatively easy to detect, and also much easier to correct. Algorithms are able to contribute to this.
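As a simple illustration of how such a check can be automated, here is a minimal sketch using hypothetical decisions and a commonly cited (but context-dependent) four-fifths rule of thumb; the groups, numbers and threshold are assumptions for the example, not a description of any real system:

```python
# Minimal sketch with made-up data: compare outcome rates between groups
# and compute a "disparate impact" ratio as a quick bias check.

# Each record: (protected_group, decision), decision 1 = selected/flagged.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group: str) -> float:
    """Fraction of positive decisions for one group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("A")                      # 0.60 in this toy data
rate_b = selection_rate("B")                      # 0.20 in this toy data
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rate group A: {rate_a:.2f}")
print(f"selection rate group B: {rate_b:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8; because a check like this
# runs automatically over the whole dataset, bias in data is far easier to
# detect than bias hidden in individual human decisions.
if ratio < 0.8:
    print("warning: possible discriminatory pattern between groups")
```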

That is why, on the basis of the social meaning of the word discrimination, I would like to make the following point: algorithms do not discriminate. Provided they are controlled by people, they can contribute to a society in which everyone is treated equally in equal circumstances.

“All persons in the Netherlands shall be treated equally in equal circumstances. Discrimination on the grounds of religion, belief, political opinion, race or sex or on other grounds whatsoever shall not be permitted.” (Article 1, Dutch Constitution)


About this column:

In a weekly column, alternately written by Eveline van Zeeland, Jan Wouters, Katleen Gabriels, Maarten Steinbuch, Mary Fiers, Lucien Engelen, Peter de Kock, Tessie Hartjes and Auke Hoekstra, Innovation Origins tries to find out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions for the problems of our time, so that tomorrow will be good. Here are all the previous episodes.


About the author

Dr. Peter de Kock is adept at combining data science with scenario planning to prevent crime and enhance safety. De Kock graduated as a filmmaker from the Film Academy of the Amsterdam School of the Arts, where he mastered the art of creating scenarios for feature films and documentaries. After receiving a master’s degree in Criminal Investigation at the Police Academy, he was offered a position within the Dutch National Police force, where he served as acting head of several covert departments. Within this domain he was able to introduce (creative) scenarios to anticipate and investigate crime. In 2014 De Kock combined art, criminal investigation and data science in his dissertation "Anticipating Criminal Behaviour", with which he earned his doctorate at Tilburg University. De Kock is founder and director of Pandora Intelligence, an independent security company specialized in security risks. The company uses a scenario-based approach to discover narratives in unstructured data, which helps (non-)governmental organisations to mitigate risks and enhance opportunities.