
People talk to all kinds of things: their computer, television, telephone or even their car. But very few of the things being addressed can give a specific reply. At least not yet. For cars, that could soon change. A car like K.I.T.T. from the 1980s series Knight Rider is, to a certain extent, no longer quite so utopian. Part two of our short series on intelligent speech recognition looks at how the automotive sector could change in the not too distant future.

“A great deal will also happen in the automotive sector via voice control, especially in the field of autonomous driving. It will be more than just pressing buttons or manually operating other controls,” says Dagmar Schuller, CEO and co-founder of the Munich-based start-up audEERING. Voice control, and language itself, is particularly important for usability in the run-up to product development, Schuller explains. “That is: how satisfied am I with the equipment of my vehicle? This is a classic field of application in which we are active.”

Speech recognition is becoming increasingly important, especially in cars. Once the system is switched on, it becomes an intelligent passenger and hears everything that goes on in the car. If it detects, for example, that the driver is sleepy, agitated, angry or stressed, the system can intervene. When the driver talks to it, it can judge his condition. “A stressed driver increases the risk of accidents tenfold,” says Dagmar Schuller. “The system can then, like K.I.T.T. in Knight Rider, automatically suggest switching on the autopilot. But the decision is not necessarily based only on what the driver says, because perhaps he says nothing at all. The system may instead have been hearing quarrelling, swearing, yawning or perhaps even snoring in the car for quite a while; then the content of what is said is completely irrelevant.”

Intelligent speech recognition is not limited to speech alone but includes all kinds of noises; in other words, it performs what is known as acoustic scene analysis. “Is a child screaming in the back, which can be an enormous stress factor? Did the dog jump into the front? Is the dog barking? Was there perhaps an accident, because a siren could be heard? Is anybody screaming in the car? Is someone arguing the whole time? When it comes to detecting the driver’s state, the driver himself is only one element of the overall picture.” The basic requirements for using such a system are already met today, because all new cars have microphones, for example for the hands-free kit, and microphones are all that is needed.
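To make the idea of acoustic scene analysis a little more concrete, here is a minimal, purely illustrative sketch in Python. The event classes, the feature pipeline and the placeholder classifier are assumptions for illustration only; they are not audEERING’s actual software.

```python
# Purely illustrative sketch of acoustic scene analysis on in-car audio.
# The event classes, feature pipeline and classifier are placeholders,
# not audEERING's actual software.
import numpy as np
import librosa

EVENT_CLASSES = ["speech", "child_crying", "dog_barking", "siren", "snoring"]

def extract_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load a cabin recording and compute log-mel features, a common front end."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def classify_scene(features: np.ndarray) -> dict:
    """Placeholder classifier: a real system would run a trained model over
    the features to get a probability for each acoustic event class."""
    scores = np.random.dirichlet(np.ones(len(EVENT_CLASSES)))  # dummy scores
    return dict(zip(EVENT_CLASSES, scores.round(3)))

if __name__ == "__main__":
    feats = extract_features("cabin_recording.wav")  # hypothetical file name
    print(classify_scene(feats))
```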

Intelligent speech recognition in the car

Another step towards intelligent cars that respond to human speech could be controlling the car with your voice. “Your voice has a print that is as unique as an iris scan or an ordinary, individual fingerprint. Every voice is individual and has individual characteristics,” emphasizes Schuller. “You can imitate voices, but basically every voice is as unique as a person’s other physical characteristics.” Such voice control could be an insurmountable obstacle for potential thieves. It could work independently of a keyword and react purely to the individual characteristics of the voice, or “it can also be combined with other locking mechanisms such as keywords. The software is very flexible and adaptable.”
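The underlying idea is speaker verification: compare a new utterance against an enrolled voiceprint and only unlock if they match closely enough. The following sketch assumes a hypothetical embedding function and threshold; it is not the actual product.

```python
# Illustrative sketch of voice-based unlocking via speaker verification.
# The embedding function and threshold are assumptions, not the real product.
import numpy as np

def embed_voice(samples: np.ndarray) -> np.ndarray:
    """Placeholder for a trained speaker-embedding network (e.g. x-vectors):
    maps an utterance to a fixed-size 'voiceprint' vector."""
    rng = np.random.default_rng(abs(hash(samples.tobytes())) % (2**32))
    return rng.normal(size=256)

def is_owner(enrolled: np.ndarray, attempt: np.ndarray,
             threshold: float = 0.75) -> bool:
    """Unlock only if the new utterance is close enough to the enrolled voiceprint."""
    cos = float(np.dot(enrolled, attempt) /
                (np.linalg.norm(enrolled) * np.linalg.norm(attempt)))
    return cos >= threshold

# Enrolment happens once with the owner's voice; later attempts are compared.
owner_print = embed_voice(np.random.randn(16000))  # dummy 1-second utterance
attempt = embed_voice(np.random.randn(16000))
print("unlock" if is_owner(owner_print, attempt) else "stay locked")
```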

But how vulnerable would the software be to hacker attacks? The great fear many people have about autonomous cars is precisely that someone will outwit the system and take control, leaving the driver a helpless passenger who can do nothing about it. Dagmar Schuller is reassuring. “The system is extremely stable and also has the advantage that we offer two variants. One runs via a web API, i.e. via the cloud; the second is an on-device, embedded version for which you don’t even have to be online. All calculations and evaluations are carried out exclusively on the device, in this case in the car.”

Similar to Amazon Alexa, a connection only needs to be established if one wants to communicate externally; all other calculations take place directly on the device and remain there. “This is also an advantage in the medical field, where very sensitive data is involved. Only certain acoustic fingerprints or certain feature vectors have to leave the device for evaluation, and they allow no conclusions about whom they belong to. So I decide 100 percent for myself whether or not to give my data to someone. Everything stays with me, and I don’t have to worry that the insurance company will give me a bad contract, for example, because they know all my data.”
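The privacy argument boils down to a simple pattern: the raw recording stays on the device, and at most a compact, anonymised feature vector is shared, and only with consent. The sketch below illustrates that pattern with deliberately crude features and hypothetical function names.

```python
# Sketch of the on-device idea: raw audio never leaves the car; at most an
# anonymised feature vector is transmitted, and only with consent.
# Function names and the simple features are illustrative assumptions.
import numpy as np

def extract_feature_vector(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Summarise the audio locally (here: crude frame-energy statistics).
    A production system would compute richer acoustic descriptors, but
    likewise without retaining or transmitting the raw recording."""
    frame = sr // 100                        # 10 ms frames
    n = (len(audio) // frame) * frame
    energies = (audio[:n].reshape(-1, frame) ** 2).mean(axis=1)
    return np.array([energies.mean(), energies.std(), energies.max()])

def share_vector(vector: np.ndarray, user_consented: bool) -> None:
    """Only the derived vector ever leaves the device, and only if permitted."""
    if user_consented:
        print("uploading feature vector:", np.round(vector, 4))
    else:
        print("nothing leaves the device")

share_vector(extract_feature_vector(np.random.randn(16000)), user_consented=False)
```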

So there is no need to fear that a hacker will gain access to the system and perhaps steer you into a bridge pier simply because he can? “The device forms a self-contained unit,” says Schuller. “Depending on what you want to capture, you can make it bigger or smaller. Our software is also so small that it even fits on hardware such as a hearing aid if you are only looking for very specific features, and then everything can be calculated completely there, in the device itself.”

Photo: Piqzwa
Graphic: Statista

Related news about intelligent speech recognition:

Fighting depression and suicides with intelligent speech recognition

 

Also read: Alexa, ask BMW: “Are my windows open?”
