Image: © SIRE

I am a fan of SIRE’s ‘be nice’ campaign (#doeslief). The ‘cold facts’ in the campaign especially appeal to the imagination: “146,571 tweets with the word ‘cancer’ in 2018”, “each year 8% of public transport employees are spat on”, “in 2018, the word ‘asshole’ was tweeted 53,265 times”, “100% of referees are insulted”, and “72% of road inspectors regularly get shown a middle finger”. Unfortunately, the campaign is necessary; we are seemingly less and less nice to each other. #Benice

Maybe even more is needed than just a #benice campaign? In an article published in ‘Artificial Intelligence and Society’, I read about a robot that can help you be nicer to other people: the #benice robot. It is an interesting thought to program robots that help you be nicer to those around you, but of course there are some issues here. Various ethicists and tech philosophers will question a #benice robot. After all, a #benice robot, however effective it may be, pushes the limits of influencing behaviour and has a paternalistic trait. From various studies, we know that people are strongly inclined to follow commands from robots, even when those commands are strange or inappropriate. From my own research, I know that the conversational style, and therefore the character, of a robot influences the extent to which it is trusted. People are, in other words, very sensitive to the behaviour, style and remarks of a robot, so extra caution is always a necessity in this context.

However paternalistic it may be, when people apparently can’t be nice to each other, I’m quickly inclined to say that we can use all the help there is — including the help of the #benice robot. Extra attention should then be paid to the design principles behind the robot. In other words, what framework, what mindset and what character do you give it? Should it only coach you, in a friendly manner, towards nice behaviour, or may it also get angry with you when you are unfriendly? And what advice do you let the robot give when someone else is unfriendly towards you, or towards the robot itself? Do you only let the robot suggest moments when you could be nice to the other person, or does the robot interfere in the content of every interaction?

There is a big difference between ‘be nice’ and ‘be wise’. Maybe it is a step too far to let a robot coach people towards friendlier behaviour, but can we use robot assistants to prevent unfriendly behaviour? Exactly: a robot that helps you count to ten from time to time. #bewise

Eveline van Zeeland is an associate lecturer in Smart Marketing and Strategy at Fontys University of Applied Sciences. She is the author of Basisboek Neuromarketing and a columnist for Fontys BRON and Magazine Clou. From now on she is also a columnist for Innovation Origins, giving us her own sneak preview of the future. Read an interview with Eveline van Zeeland here.

About this column:

In a weekly column, written alternately by Eveline van Zeeland, Maarten Steinbuch, Mary Fiers, Carlo van de Weijer, Lucien Engelen, Tessie Hartjes and Auke Hoekstra, Innovation Origins tries to find out what the future will look like. The seven columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time — so that tomorrow will be good. Read all previous episodes here.