hands off © Tesla

A while ago, I attended a research presentation on autonomous cars by researchers at the Stuttgart University of Applied Sciences (DHBW). I learned that there are different levels of autonomous driving.

The most basic level is 'hands-on', in which the human controls the car entirely and is merely supported while driving, for example by adaptive cruise control. One level higher we have 'hands-off'. The car can steer itself, but the driver must be able to take over the steering wheel at any time, so it is important that the driver keeps his eyes on the road. In practice, this turns out to be difficult. The researchers at the Stuttgart University of Applied Sciences discovered with the aid of mobile eye tracking that most drivers let their eyes wander after only a few seconds, even though they were explicitly given the task of keeping their eyes on the road. Level 2 is, therefore, a level we had better skip.

On to level 3: 'eyes-off'. At this level, the car is programmed in such a way that the driver no longer needs to focus constantly on the road. Basically, the driver can sit back and read a book. Should a critical situation arise, the car will alert the driver, who can immediately take over the wheel.

Levels of autonomous driving (developed by the Society of Automotive Engineers)

Only in the top two levels do we see full autonomy. No driver intervention is required; the cars are programmed in such a way that they can do everything themselves. At level 4, 'mind-off', there is still a steering wheel, but at level 5 the buyer can simply choose to leave the steering wheel out altogether. In fact, at level 5 the driver has become entirely superfluous. Cars at levels 4 and 5 have been morally programmed so that they can make their own choice in every situation, including those on the boundary between life and death.

Of course, this is the (near) future. The question is not so much when, but rather how. According to the Stuttgart researchers, Tesla's autonomous cars are programmed in a more risk-oriented way than Daimler's. Daimler's risk-averse moral programming ensures greater safety, not only for yourself as the "driver" but also for other road users. Nevertheless, Tesla is more popular with most test drivers at the Stuttgart University of Applied Sciences, not because of its moral programming, but because of its attractive user interface design.

Design, therefore, seems to be winning over moral programming. That got me thinking: will the way in which technology is morally programmed ever become part of consumers' purchase considerations? Will the moral programming of smart technology become part of the elevator pitch of the average company? In a few years' time, will we hear consumers say, 'I chose the autonomous Mercedes because the way it is morally programmed fits better with my principles'? Or will there perhaps be a choice guide that helps us interpret the differences in moral programming, just as one now helps us interpret the differences between political parties? I'm really curious!

PS for everyone interested in modern mobility services, including autonomous driving: come to the Automotive Campus in Helmond on Sunday afternoon, June 2, for the Mobifest: a public day showcasing all the interesting new mobility developments, prior to the ITS Europe Congress that will be held the following week in Eindhoven and Helmond.

Mobifest Helmond

About this column:

In a weekly column, written alternately by Eveline van Zeeland, Jan Wouters, Katleen Gabriels, Maarten Steinbuch, Mary Fiers, Carlo van de Weijer, Lucien Engelen, Tessie Hartjes and Auke Hoekstra, Innovation Origins tries to find out what the future will look like. These columnists, occasionally supplemented by guest bloggers, are all working in their own way on solutions for the problems of our time, so that tomorrow will be good. Here are all the previous episodes.