
Have you ever pondered whether artificial intelligence could ever possess consciousness, akin to humans and some animals? Or whether this might already be the case at this very moment? As technology advances, the age-old debate surrounding AI and consciousness has been rekindled with fervor. To shed light on this intriguing question, we spoke with two experts, Jan Broersen and Brad Saad, who hold contrasting perspectives on the topic.

  • The debate on AI consciousness continues, with differing perspectives from experts.
  • Consciousness remains a complex, philosophical, and scientific topic.

What does consciousness actually entail? 

Consciousness is a complex and multifaceted mental phenomenon that encompasses our awareness of ourselves and the world around us. It involves our ability to think, perceive, feel, and experience subjective states. The nature of consciousness remains a topic of philosophical and scientific inquiry, with ongoing debates about its origin, function, and potential in artificial systems like AI.

Brad Saad

Senior research fellow in philosophy at Oxford’s Global Priorities Institute

Saad was also a researcher at the University of Utrecht, where he focused his work on various facets of the philosophy of artificial intelligence. His particular areas of interest encompass the epistemology, ethics, and metaphysics of future digital minds, with a strong commitment to ensuring a positive long-term future. In 2019, he completed his PhD, which centered on non-reductive theories of consciousness.

Jan Broersen

Professor at the University of Utrecht, specializing in logical methods in Artificial Intelligence (AI).

His academic journey, rooted in mathematics, logic, and computer science, brings a unique perspective to studying AI from a humanities angle. His primary research interests include responsible AI, knowledge representation and reasoning, and developing logical theories of agency.

This is an article from IO Next: The Brain. This magazine is full of stories about scientists, entrepreneurs, and innovations that share one common goal: to better understand the most complex system there is.

Let’s not beat around the bush. Are we currently dealing with conscious machines?

Broersen: “I think AI has zero consciousness. I believe that because I know how those machines are built. I know what’s in them. And I don’t see how those techniques are sufficient to create something that we call consciousness.

For instance, there is a debate about whether consciousness has emerged in language models like ChatGPT. The general public is being led astray because they think ChatGPT can actually ‘think’. But looking closely at how ChatGPT is put together, it ‘reads’ millions of pages. And it seems like ChatGPT also has information about itself, and that creates an illusion of awareness. When, in fact, you’re hearing the programmers talking. They just explicitly put those texts in there.

So, if we zoom out to AI in general again, the computers we have made so far are too simple for consciousness to exist. What goes on here physically with us is very complex; too complex to put into a computer.”

Saad: “I consider it unlikely that any current AIs are conscious. But I don’t believe we are in a position to be sure about this. There are many different sources of evidence to look at when we try to form a view about whether existing systems are conscious. And there are methodological puzzles about how to evaluate evidence. But again, we can’t know for sure.

Something that makes me think there is, maybe, a double-digit probability of conscious AI systems in the next decade is that we’re giving them not only language abilities but also starting to develop multimodal capabilities. So, AI becomes more and more able to perceive its environment. Therefore, I think that development in AI is heading toward creating systems with more and more candidate markers for consciousness.”

Do you think that to establish consciousness, it needs to resemble humans? 

Broersen: “This is an extremely tough question. Consciousness is often defined as that which coincides with what it is like to be someone. I think consciousness will be something like what it is like to be me. But the question then is: how far can we go in imagining other kinds of consciousness? We can ask whether there is also such a thing as what it is like to be a dog. And here we are, completely in the dark. I think there is. But dog consciousness will be different simply because dogs interact with the world differently. They seem to be able to smell much better, for example. Yet, dogs are still close to us in a way. I see no proper way to think about types of consciousness that are completely different from those of humans and animals.”

Saad: “I don’t think the systems must have the same architecture as humans to be conscious. However, I do think we’re better positioned to evaluate consciousness in systems that resemble humans.

An interesting thing to consider is that AI systems (if they can be conscious at all) could have many more experiences than we have. Even with existing computer architectures, the processing speed vastly exceeds that of the brain processing relevant to consciousness. Those computers can be scaled up to process more information than the brain. The brain is subject to chemical and biological limitations on its processing, whereas computer systems are not subject to those limitations.
For instance, AI systems can, potentially, suffer in more extreme ways. Human suffering, as terrible as it is, is still subject to biological limitations that limit how awful it can be. But if we create AI systems that can suffer, they may not be subject to such limitations.”

Are we paying enough attention to the moral aspects of developing AI?

Broersen: “In any case, engaging with the ethical dilemmas surrounding AI is wise. If I am wrong about consciousness in AI, and of course I’m not sure either, it has huge ethical implications. These ethical implications are almost incalculable. Then suddenly, it can become a bad thing to, for instance, turn off your computer, because then you are denying something conscious its existence.”

Saad: “I don’t feel like we think enough about the moral aspects involved in developing AI. There’s a chance that we’ll create a huge number of conscious AIs without knowing it. And if we do that, it wouldn’t be good if we just continued treating them like we currently treat cell phones or laptops. In that case, we could be committing grave wrongs against them on a large scale. So, I take this risk very seriously. If we slowed down development or sped up our understanding of consciousness, the risk could be reined in.”

Would it be likely that while we further advance AI, we will eventually have conscious machines?

Broersen: “I expect the chances to be minimal. I only consider that likely if we create very different machines from what we have right now. Consider, for example, very sophisticated, fully developed quantum machines. In that case, there might be a change to the accepted model of computation itself.”

Saad: “I regard whether conscious AI arrives in the next 100 years as a coin flip. I’m skeptical that we need quantum computers for it to happen. Because, as far as we know, the brain is not exploiting quantum phenomena to generate consciousness.”

Do you think humans will empathize with AI and treat them ethically when we recognize them as conscious?

Broersen: “On the one hand, humans do not have a good track record of benevolently including other kinds of beings in their societies. So there is certainly reason to be pessimistic about this. On the other hand, I also think that the core of a solution to this potential problem lies precisely in the possibility that the consciousness of machines will be very much like ours. When they are exactly like us, we might recognize that, and the threshold for including them in our society will be low.”

Saad: “We might treat them well, but empathy for machines doesn’t necessarily correlate with whether they’re conscious. Empathy might correlate with whether we think they’re conscious. It might be triggered by things like looking human, having a face, having eyes. But those are not good markers in AI systems for consciousness. To evaluate whether AI systems are conscious, we have to look inside them and see what the underlying workings are.”