Make old family photos come alive. Have Joe Biden or Kim Jong-un sing a song and act in famous Hollywood scenes. Deepfake technology makes all of this possible. It is done with clever software that creates or manipulates images, sound and text. It all sounds quite innocent. Yet this technology carries a lot of risks. What about politicians suddenly shouting things they never said in real life? Or your daughter calling to ask you to transfer money, when the voice later turns out to be software …
The technology is now so advanced that most people do not realize that the images have been tampered with. That’s according to research carried out by the University of Amsterdam in The Netherlands. The same research also found that these so-called deepfakes can negatively influence opinions. Experts warn of an infocalypse. If this continues, we will no longer be able to rely on our eyes and ears to judge what is real.
Dutch AI Coalition
Just as the brain is still the most elusive part of our body, artificial intelligence is still very much uncharted territory. That our brains control our bodies is something we as humans have come to accept. However, this does not apply to the way in which AI is gradually taking over control of our society. We would like to have a few more vigorous debates about that. In a series of articles and interviews, Innovation Origins, in close cooperation with the Dutch AI Coalition, reveals what the average Dutch person feels about this all-important social revolution. How do we as humans keep our hands on the controls? The fears, the opportunities, the dilemmas.
In the opinion of future tech strategist Mark van Rijmenam, we as a society have a serious problem when we can no longer tell if videos are real or fake. “Besides the fact that you can no longer trust the images you see, a politician can also exploit deepfakes to deny certain statements,” he explains.
Reality or deepfake?
He believes that things have not yet reached that point. In many images, we can still see with the naked eye that they are manipulations. “Provided you pay attention of course, since images are becoming more and more realistic. Just like other technologies, the development of deepfake technology is advancing incredibly fast. People will no longer be able to tell the difference within one to three years,” Van Rijmenam predicts.
Jarno Duursma, a technology expert and author of the report ‘Deepfake technology; the infocalypse’, is not blind to the risks of deepfakes either. Duursma already sees deepfakes that are indistinguishable from the real thing. Yet he thinks the dangers are overestimated. “The older generation in particular is still from a time when they trusted that whatever was in the newspaper was true. With the advent of social media, suddenly anyone could hurl information into the world. Including information that is not true. So we’ve been dealing with unreliable information on the Internet for some time now.”
Recently, scientists at the University at Buffalo released an AI tool that detects deepfakes with 94 percent accuracy. To do this, the model ‘looks’ at the reflections in the eyes, among other things. Both experts agree that unmasking deepfakes will always be a ‘cat-and-mouse game’. But even if it is discovered afterwards that something is a deepfake, the damage can be substantial. Van Rijmenam: “Think about the damage to companies’ reputations. Victims of fake revenge porn who are no longer accepted by their families. Or people who give in to blackmail resulting from manipulated images. Even if it quickly becomes clear that these are deepfakes, the damage has already been done.”
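The article does not spell out how the Buffalo model works internally, beyond the hint that it examines the reflections in the eyes. Purely to illustrate that idea, the toy Python sketch below scores how similar the highlight patterns in the two eyes are: in a genuine photo both eyes reflect the same scene, while generated faces often break that consistency. The function names, threshold and synthetic data are illustrative assumptions, not the researchers’ actual method.

```python
# Toy illustration only: in a real photo both eyes reflect the same scene,
# so their highlight patterns should correlate; GAN faces often break this.
import numpy as np

def reflection_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Normalized correlation between two grayscale eye crops of the same shape."""
    l = (left_eye - left_eye.mean()) / (left_eye.std() + 1e-8)
    r = (right_eye - right_eye.mean()) / (right_eye.std() + 1e-8)
    return float((l * r).mean())

def looks_like_deepfake(left_eye, right_eye, threshold=0.5):
    # The threshold is an arbitrary illustrative value, not a published one.
    return reflection_similarity(left_eye, right_eye) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((32, 32))  # shared reflection pattern for a "real" photo
    real_left = scene + 0.05 * rng.random((32, 32))
    real_right = scene + 0.05 * rng.random((32, 32))
    fake_left, fake_right = rng.random((32, 32)), rng.random((32, 32))  # inconsistent highlights
    print("real pair flagged as fake:", looks_like_deepfake(real_left, real_right))
    print("fake pair flagged as fake:", looks_like_deepfake(fake_left, fake_right))
```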
Better representation of diverse products
Innovation Origins asked a number of Dutch people what they think about deepfakes and whether, in addition to the dangers, they also see opportunities in this technology. Like the experts, they cite fake news and identity fraud as the biggest risks. They say they are aware of this phenomenon when viewing information on the Internet. Some respondents are concerned about what the consequences of deepfakes might be. According to them, opportunities lie in being able to ‘better imagine what something will look like’, ‘advertising’ and ‘making more and easier funny videos for the Internet’.
Not just drawbacks
Despite all the risks, both technology experts believe there are also plenty of upsides to deepfake technology. Van Rijmenam: “Using deepfake technology, you can help people get over their fear of swimming or other fears. By pasting their face onto a video, a kind of memory is implanted in their minds. Your brain doesn’t know whether it is true or not. It works the same as if you were to imagine yourself speaking in front of a thousand people. Then when you actually step on stage, your brain thinks, ‘I’ve already done this, I can do this!’”
Duursma is more cautious: “This still needs to be researched; we don’t yet know if this is really how it works in our brain.” Other advantages are more obvious, he says: “With deepfake technology, you can clone the voices of the voice actors of The Simpsons and continue making episodes long after they have passed away. You can bring remarkable people who have died back to life. A movie with Elvis Presley? Why not! I even had a digital avatar of myself created that I can use for short video presentations. It doesn’t work perfectly yet, but it saves a lot of time. I no longer have to record a video of myself. I type the text and the AI system makes a video to go with it.”
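Duursma does not name the avatar service he used. Just to make the ‘type text, get synthetic media’ step of such a pipeline concrete, here is a minimal sketch using the off-the-shelf gTTS library, which does generic text-to-speech rather than voice cloning; the script text and file name are made up, and a real avatar tool would also render a lip-synced video on top of the audio.

```python
# Minimal illustration of the "type text, get synthetic speech" step of an
# avatar pipeline. gTTS is a generic text-to-speech library, not a voice clone,
# and stands in here for whatever (unnamed) avatar service was actually used.
from gtts import gTTS

script = "Welcome to this short presentation on synthetic media."
tts = gTTS(text=script, lang="en")
tts.save("presentation_audio.mp3")  # a video layer would be rendered on top of this
print("Saved presentation_audio.mp3")
```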
Synthetic media; scripts ‘written’ by the computer
Duursma prefers to use the umbrella term synthetic media for deepfakes. “These are renderings made or manipulated by AI software. From paintings to film scripts and even digital individuals who can speak in different languages. Basically anything we can think up, but created or modified by AI. This software makes creativity accessible to everyone. It allows you to generate thousands of ideas or perspectives and choose from any of them. It’s a goldmine of ideas.”
For instance, there is already an AI model that conjures up new images on the basis of a written text. Or comes up with ideas for new start-ups. These technologies use the GPT-3 language model, which ‘wrote’ an article in The Guardian last year. According to Duursma, we will increasingly work with these kinds of systems in the future. “People are afraid of being made redundant. That’s a kind of primal feeling. Yet we already lean on technology for so many things. I don’t remember phone numbers anymore, for one thing. To me, machines with imagination that generate new ideas for us are not a scary idea at all. It gives everyone access to creativity.”
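For readers who want to see what ‘comes up with ideas for new start-ups’ looks like in practice, here is a minimal sketch that prompts the GPT-3 completion endpoint through the OpenAI Python client as it worked in the GPT-3 era; the prompt, model choice and parameters are illustrative assumptions, and it requires your own API key.

```python
# Hedged sketch: generating start-up ideas with the GPT-3 completion endpoint,
# using the OpenAI Python client of the GPT-3 era. Prompt and parameters are
# illustrative assumptions, not the exact tools mentioned in the article.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # requires your own key

response = openai.Completion.create(
    engine="davinci",  # GPT-3 base model
    prompt="Suggest three start-up ideas that use synthetic media responsibly:\n1.",
    max_tokens=120,
    temperature=0.8,
)
print("1." + response.choices[0].text)
```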