
Dr. Katleen Gabriels is an Assistant Professor at Eindhoven University of Technology, specializing in computer ethics. In 2016, she published the book “Onlife” (in Dutch), in which she analyzes the potential and pitfalls of the Internet of Things, digitization, and big data. Katleen is an elected steering committee member of Ethicomp, the international organization for ethical computing. JAXenter interviewed her ahead of the keynote she will give at the Machine Learning Conference next month in Berlin.

By Melanie Feldman, JAXenter

Our first ML Conference will debut in December in Berlin. Until then, we’d like to give you a taste of what’s to come. We talked with Dr. Katleen Gabriels, Assistant Professor at Eindhoven University of Technology, about how algorithms influence our daily lives and why ethics is essential to the development of machine learning.

JAXenter: In your ML Conference keynote, you will talk about the influence of algorithms on our daily life. What is your personal opinion on this topic – do you think the influence of algorithms is under- or overrated?

Katleen Gabriels: This influence is definitely underrated. We already live in the era of the Internet of Things (IoT), where algorithms increasingly make decisions for and about us on a daily basis. Algorithms already shape our love lives on dating apps and websites, our job prospects (companies can use them to screen our resumes), and even the outcome of court cases.

Or consider, for instance, ‘recommender engines’ such as Google’s search engine: numerous people worldwide inform themselves about the world every day on a platform where algorithms decide which information they will or will not see. And the company keeps those algorithms secret. Unfortunately, too many people still think that the ranking of the results is based on ‘reliability’. We should increase awareness of this, not only of the algorithms themselves but also of ‘search engine optimization’, especially in an IoT era with persuasive and predictive technologies that can easily violate our autonomy in undesirable ways.
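To make this concrete, here is a deliberately simplified Python sketch. The page names, signals, and weights below are invented for illustration (Google’s actual ranking algorithm is not public); the point is only that results can be ordered by optimizable signals rather than by reliability:

```python
# Toy ranking sketch -- NOT any real search engine's algorithm.
# All URLs, signals, and weights are invented for illustration.
pages = [
    {"url": "example.org/careful-study",  "keyword_hits": 3, "inbound_links": 12},
    {"url": "example.com/seo-optimized",  "keyword_hits": 9, "inbound_links": 450},
]

def toy_rank(page):
    # Whoever optimizes for these signals rises to the top,
    # regardless of how trustworthy the content actually is.
    return 2 * page["keyword_hits"] + 0.1 * page["inbound_links"]

for page in sorted(pages, key=toy_rank, reverse=True):
    print(page["url"])
# example.com/seo-optimized comes first purely because it games the signals.
```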

JAXenter: You say algorithms can never be completely neutral because their creators (developers) are never neutral. What advice can you give developers to keep them from falling into the trap of exerting unwanted influence?

Katleen Gabriels: Realizing that these algorithms are neither neutral nor value-free is an essential starting point. At Eindhoven University of Technology, where I work, all students (future engineers) have to take courses on ethics, such as engineering ethics. The non-neutrality of technology is an important part of these courses. The way you as an engineer design a technology influences how users can make use of it: a simple example that already shows design is not a neutral process. With regard to algorithms, there is a plethora of examples of how human biases slip into them, such as racist profiling in ‘precrime methodology’. Here, too, it is important to increase awareness.
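As a simplified illustration of how such a bias can slip in, consider a seemingly ‘neutral’ risk score computed from historically skewed records. All neighborhood names and figures in this Python sketch are invented; it is not taken from any real predictive-policing system:

```python
# Illustrative sketch: how bias in historical data slips into an algorithm's output.
# Suppose two neighborhoods have the same true offense rate, but neighborhood A
# has historically been patrolled twice as often, so twice as many offenses
# were *recorded* there. (All numbers are made up.)
historical_records = {
    "neighborhood_A": {"recorded_offenses": 200, "population": 10_000},
    "neighborhood_B": {"recorded_offenses": 100, "population": 10_000},
}

def naive_risk_score(records):
    """Score each area by recorded offenses per capita -- a seemingly
    'neutral' rule that simply reproduces the bias in the records."""
    return {
        area: data["recorded_offenses"] / data["population"]
        for area, data in records.items()
    }

print(naive_risk_score(historical_records))
# {'neighborhood_A': 0.02, 'neighborhood_B': 0.01}
# The score labels A "twice as risky", which can direct even more patrols
# there and generate even more records: a feedback loop, not neutrality.
```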


JAXenter: An example of ML gone ‘wrong’ is Microsoft’s chatbot Tay, which quickly started saying inappropriate things on the internet. Do you think that artificial intelligence needs a sort of moral guideline in the first place?

Katleen Gabriels: Definitely! And Microsoft should have considered this before ‘releasing’ Tay on Twitter. There are some positive developments: Google, for instance, has an ethics board on AI.

However, this moral guideline, or code of ethics, should be the subject of an extensive public debate, not one held only within companies or in academic and expert circles: we as a society have to reflect together on desirable and undesirable developments.

JAXenter: Where do you see the biggest potential for the positive use of artificial intelligence?

Katleen Gabriels: I welcome technological development and innovation, but this progress should go hand in hand with ethical progress (or at least not with ethical decay), and this does not happen automatically: we really have to work hard to attain it. AI can assist humans in so many positive ways that it is difficult to pick just one example. To give one, albeit a general one: AI offers great potential for healthcare, for instance in the analysis of complex data.

Dr. Katleen Gabriels will deliver a keynote at the ML Conference focusing on why algorithms and datasets are not neutral, as well as how we can anticipate and reduce undesirable consequences and pitfalls.