
Technological advancements have created new opportunities for enhancing the quality of life for people with various disabilities. The Spanish start-up Eyesynth wants to improve day-to-day life for the blind.

The start-up has designed a pair of glasses that works as an audiovisual system for the visually impaired. The device is connected to a microcomputer and records the surrounding environment in three dimensions. This recording is translated into intelligible audio that is sent to the wearer. In effect, the glasses read the room for them.

The technology was developed and designed by the start-up itself. The glasses have three core features. First, they work in full 3D, allowing the user to identify shapes and spaces, sense depth and locate objects accurately. Second, no words are involved in the process: the sound is completely abstract, a new language that the brain is able to assimilate and that is very easy to learn. Lastly, the sound is transmitted through the bones of the head, which leaves the ears free for their regular range of hearing and avoids listening fatigue.
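To give a rough sense of how a system like this might turn 3D data into abstract sound, here is a minimal, hypothetical sketch in Python. It is not Eyesynth's actual algorithm; the depth-map resolution, frequency range and panning scheme are arbitrary assumptions chosen only to illustrate the idea of encoding geometry as audio.

```python
import numpy as np

SAMPLE_RATE = 44_100     # audio samples per second
FRAME_SECONDS = 0.05     # length of the sound snippet generated per depth frame

def depth_frame_to_stereo(depth, max_depth_m=6.0):
    """Map a depth map (height x width, in meters) to a short stereo buffer.
    The nearest obstacle sounds louder and higher-pitched; its horizontal
    position in the frame controls left/right panning. Purely illustrative."""
    _, width = depth.shape
    n_samples = int(SAMPLE_RATE * FRAME_SECONDS)
    t = np.linspace(0.0, FRAME_SECONDS, n_samples, endpoint=False)

    # Locate the closest point in the frame and where it sits horizontally.
    row, col = np.unravel_index(np.argmin(depth), depth.shape)
    nearest = depth[row, col]
    pan = col / (width - 1)                  # 0.0 = far left, 1.0 = far right

    # Arbitrary mappings: closer objects -> louder and higher pitch.
    loudness = float(np.clip(1.0 - nearest / max_depth_m, 0.0, 1.0))
    freq = 200.0 + 800.0 * loudness          # Hz
    tone = loudness * np.sin(2.0 * np.pi * freq * t)

    left = (1.0 - pan) * tone
    right = pan * tone
    return np.stack([left, right], axis=1)   # shape: (n_samples, 2)

# Example: a fake 240x320 depth frame with an obstacle 1.5 m away on the right.
frame = np.full((240, 320), 6.0)
frame[100:140, 250:300] = 1.5
audio = depth_frame_to_stereo(frame)
```

In the real product the output is described as a continuous, wave-like sound signature covering the whole scene rather than a single tone per frame; the sketch only conveys the principle.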

Because the brain assimilates this way of processing information fairly quickly, a blind person can soon wear the glasses while still being able to focus on conversations or any other activity.

Innovation Origins talked with CEO Antonio Quesada; here is what he had to say:

Antonio Quesada, CEO © Eyesynth

What was the motivation behind Eyesynth?

The start-up stemmed from two distinct motivations: one ideological, the other a technical challenge. The ideological one is simple: how is it possible that, despite the enormous range of technologies available today, there is still no technological standard beyond the guide dog and the cane? We are firm believers in "humanist technology", meaning technology that addresses the day-to-day problems people face. That makes us all want to go the extra mile. As for the technical challenge, there was one key question: how can we provide spatial information to a blind person in a way that is both easy to understand and instantaneous? We are passionate about challenges, and it was while seeking an answer to both of these questions that we founded Eyesynth.

Can you tell me about the technology and how it works?

The fundamental design premise was that we had to create a system that felt very natural in use. That's why we had to rely on mechanisms that already exist in nature. The technical principle we base it on is "synesthesia", which means "crossed senses". When we are born, absolutely everyone is 100% synesthetic: we can smell sounds, taste colors, hear images and experience all kinds of other mixtures of the senses. For practical reasons, the developing brain disconnects certain combinations and only keeps those that are most useful within our environment. The curious thing is that up to 14% of the population has some kind of mild synesthesia. People who have it often don't realize it, as they assume it is natural. In my case, I am slightly synesthetic when it comes to music and images: for me, every sound has a concrete shape in my imagination. Ever since I was a child, I have been able to remember complex sequences of music thanks to the shapes they form in my mind.

The "Eureka!" moment came when I asked myself the following question: what would happen if I reversed this process? That is, if I extracted real geometric data from the environment and turned it into sound, could a person instinctively interpret it? The answer turned out to be yes. We built an initial version of the image-to-sound algorithm and tested it with a friend's nine-year-old son who was born blind. The results were amazing. Then we tested it among groups of blind people, and the results were equally good. That's when we knew we were onto something important.

So Eyesynth's original motivation was to create smart glasses for this child. But we soon realized that there was clearly a social need for this type of technology, so we went on to form our own company with the aim of reaching as many people as possible. We are responsible for developing both the software and the hardware for our technology.

What has been the importance of Eyesynth?

Our goal is to expand blind people's mobility and independence. This project has given us the opportunity to meet a lot of wonderful people with exceptional qualities, and we want our technology to serve as a stimulus for showing their value to the rest of the world. In neighboring countries, being blind is not an impediment to leading a full life, whereas in other countries, unfortunately, it automatically excludes you from society. In those cases, we are convinced that people are cast aside even though they have wonderful qualities that could contribute a lot to their society. That's why we want to be the instrument that helps people reach their potential.

How is a blind person’s quality of life enhanced with Eyesynth?

From the start, our main goal has been to focus on navigating and recognizing the environment. We have succeeded in designing a system that does not use words, but rather a sound signature, similar to the sound of ocean waves, that "changes its shape" according to what the glasses' cameras record. As no actual words are used, there is no language barrier, which means the system can be used in any country. It analyzes a 120° field of view up to a distance of 6 meters, updating the data 60 times per second, so a great deal of information is available in real time. It is also very important to note that we cover areas that a cane or a guide dog cannot: obstacles in the air rather than on the ground, such as awnings, traffic signs or tree branches.

Having developed the navigation system, we now plan to expand it with software functions such as facial recognition, text recognition and a lot of other new features that we will roll out over time.
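For a rough sense of the data volume behind the figures Quesada quotes (a 120° field, 6 meters of range, 60 updates per second), here is a back-of-the-envelope calculation. The depth-map resolution and bit depth below are assumptions for illustration, not published Eyesynth specifications.

```python
# Hypothetical estimate of the raw depth data a 60 Hz system has to handle.
UPDATES_PER_SECOND = 60
FRAME_WIDTH, FRAME_HEIGHT = 320, 240   # assumed depth-map resolution
BYTES_PER_PIXEL = 2                    # assumed 16-bit depth values

bytes_per_frame = FRAME_WIDTH * FRAME_HEIGHT * BYTES_PER_PIXEL
bytes_per_second = bytes_per_frame * UPDATES_PER_SECOND

print(f"{bytes_per_frame / 1024:.0f} KiB per frame")                   # 150 KiB
print(f"{bytes_per_second / 1_048_576:.1f} MiB/s of raw depth data")   # 8.8 MiB/s
```

Even under these modest assumptions, close to nine mebibytes of depth data would have to be analyzed every second, which squares with Quesada's later point that the computation demanded custom, low-power hardware.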

[youtube https://www.youtube.com/watch?v=n7jC_127nGc]

What makes Eyesynth different from similar start-ups?

Our technology is radically different from other offerings on the market. We don't base our recognition system on spoken language; instead, we take advantage of the power of the user's brain to interpret the environment. It is a real-time system, so the response is immediate. As for the acoustics, we use bone conduction: the sound is transmitted through the skull directly to the cochlea. This way we avoid having to cover the ears with headphones or earbuds, and we eliminate the auditory stress of lengthy listening sessions.

What has been the biggest obstacle that Eyesynth has had to overcome?

The image-to-sound algorithm is tremendously complex, and processing it requires a massive quantity of data, which invariably means a huge amount of energy and computational power. That's why we had to develop our own hardware, capable of doing these high-speed calculations while consuming very little energy. The challenges on both the software and the hardware side have been very intense.

Did you ever consider giving up?

In very complex projects with small teams, the ups and downs are more noticeable than in large companies. We have had to devise many solutions in the areas of mathematics, machine vision, computer architecture and ergonomics, and, of course, figure out how to finance all of it. We have become accustomed to finding ourselves in front of seemingly insurmountable walls, but with time, focus and hard work, we have seen that these walls can be torn down. There have been really tough times, yet the team's perseverance and the passion we have put into our work have helped us get to where we are today.

What has been the most rewarding moment?

Working on a project like this gives us plenty of wonderful, rewarding moments. Each week we set aside a day for visitors who want to test our prototypes. It is amazing to welcome people from other continents who come over just to spend a couple of hours with us and try the technology. Their personal stories, their resilience and, above all, the moment when we see that our technology works for them move us enormously.

[youtube https://www.youtube.com/watch?v=sBw1KaG2OFo]

What can we expect from you in the coming years?

We are currently busy with the manufacturing process as well as with setting up distribution channels. We can't wait for our glasses to reach the streets. We are also eager for the blind community to tell us what new features they would like for their glasses, which they will be able to do through our internet forums.

Eventually, we want to become the technological mobility standard for blind people. We want to create a solid community that shares its experiences and helps us craft technologies and products that really do make people's lives easier.

Can you tell me a bit about the feedback you’ve gotten?

The response we have received so far from people who have tried our smart glasses has been fantastic. We are amazed at people's ability to adapt to our technology: users reach a level of performance and accuracy that never ceases to surprise us. This is one of the main reasons why we keep moving forward in our mission to bring this technology to as many people as possible.