Is AI sentient… yet?
AI has been achieving remarkable things recently, increasingly blurring the line between humans and machines in terms of reasoning and creativity. The sense of superiority our species prides itself on may well be shaken in the coming years. And if you are like me, you find this as exciting as it is concerning.
What would be the role of humanity in a society where machines surpass us intellectually? Would we still dare to call AI merely a tool?
In a previous article dedicated to AI in art, I discussed how the desire to create is the key element distinguishing the tool from the artist. As long as AI does not feel the desire or need to create, it is not an artist. As long as it does not feel, we are not quite playing in the same league.
Should we conclude that our ability to feel emotions and sensations will constitute the ultimate barrier between humans and machines in the future? Or will such capabilities one day be accessible to AI as well? In short, is AI on the verge of becoming sentient?
Let's do some introspection first! Are you even sure you’re actually conscious?
Are you sure you're conscious?
If you just read the excerpt above, congratulations! You have actually done much more than ingest information. You likely built a number of mental images: the face of Christian Bale if you've seen Mary Harron's film, the image of a cold handshake, or even a brief vision of your own manager. You probably heard that little voice in your head pronouncing the words one by one at the pace of your eyes scanning the sentences. You might have felt antipathy for Patrick Bateman. In other words, you experienced that piece of text.
From the moment you open your eyes in the morning, sensations, emotions, and thoughts sweep through you in a sort of captivating movie that overlays the objective reality of the world. This is your subjective experience, and it is inseparable from reality because it simply is your reality. It is the prism through which you apprehend the world.
If I programmed a robot to withdraw its hand when it touches a flame, its behavior would resemble ours. However, unlike us, the robot would not feel pain or have any thoughts or emotions. It would simply respond to an external stimulus with a logical reaction.
But what if we also programmed it to pretend that it feels pain? It could then describe precisely the searing pain damaging its circuits. Would we still say it is merely pretending? And if so, would you fall for it?
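To make the contrast concrete, here is a minimal sketch in Python of the two robots described above (the threshold and the messages are invented for illustration). From the outside, all we ever observe is the returned behavior; nothing in the code distinguishes "real" pain from scripted pain talk.

```python
# A minimal sketch of the reflex robot described above (hypothetical values).
# Neither version has any inner experience: both simply map a stimulus
# to a programmed response.

def reflex_robot(sensor_temp_celsius: float) -> str:
    """Withdraw the hand when the sensor detects flame-level heat."""
    if sensor_temp_celsius > 60:
        return "withdraw hand"
    return "hold position"

def pretending_robot(sensor_temp_celsius: float) -> str:
    """Same reflex, plus a scripted description of 'pain'."""
    if sensor_temp_celsius > 60:
        return "withdraw hand -- 'Ouch! A searing pain is damaging my circuits!'"
    return "hold position"

print(reflex_robot(300.0))      # withdraw hand
print(pretending_robot(300.0))  # withdraw hand -- 'Ouch! ...'
```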
In fact, with our current understanding, there is simply no foolproof way to assess whether someone or something has or lacks subjective experience. And this even applies to humans. The concept of a human completely devoid of subjective experience, yet perfectly mimicking conscious behavior, is not new. This is referred to as a philosophical zombie (or p-zombie).
So, are we surrounded by zombies or, on the contrary, are there more conscious beings than we think? What can we say about animals or plants? What can we say about machines and AI?
Are we surrounded by zombies?
If we accept the idea that consciousness primarily comes from a physical substrate, presumably our brain, and given the genetic closeness between humans, it seems reasonable to think that we all have a somewhat comparable experience of the world. Not rigorously identical, but comparable. Thus, we can easily project ourselves when we see someone laughing, blushing with shame, crying, or screaming in pain. We can imagine what it feels like to be in their shoes.
But for some obscure reason, it is perfectly acceptable to put a live lobster in boiling water at Christmas. We tend to overlook the perceptual abilities of beings that look less like us. Of course, attitudes evolve, but it is worth recalling that up until the 1970s, even babies (human ones) were still operated on without anesthesia because they were not thought to be capable of feeling pain...
We know today that babies can feel pain, as well as experience very complex emotions such as separation anxiety. Today, we also think that lobsters do feel pain, as their reactions have long suggested. But can a lobster feel separation anxiety, or even imposter syndrome, for example?
In fact, like most cognitive abilities, subjective experience is likely not a binary faculty, simply present or absent. It's more likely that nature has endowed living beings with varying degrees of consciousness, ranging from basic sensations like hunger or pain to more sophisticated experiences like imposter syndrome, which we just mentioned.
Where does the spectrum of consciousness begin? Where do we draw the line between us and the zombies? Can a bacterium already feel? And if so, what is it like to be a bacterium?
At the other end of the spectrum, we usually place humans. We generally consider ourselves to be the beings with the most sophisticated conscious experience on this planet. That may be the case, but for how much longer?
To address such a question, one must know what consciousness is truly made of. There is no scientific consensus on this matter. However, this does not mean that science has nothing to say on the subject… Let’s dive deeper.
An ocean of experience
The ocean is primarily made of water. Yet, nothing distinguishes one water molecule from another. They are identical copies. Thus, one might think that a thorough study of a single molecule could tell us everything there is to know about the ocean by extrapolation. This approach, considering the study of a system as the study of its parts, is called reductionism.
But what about waves then? A single water molecule cannot create a wave. The very concept of a wave does not even make sense at the molecular level. A wave is the result of a driving phenomenon: one molecule moves under the action of a force and pulls along another, which performs a similar movement, and so on. It is the particular interaction of billions of water molecules that gives rise to the wave, and the laws governing the behavior of a wave are different from those that govern molecular behaviors.
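To see this driving phenomenon in action, here is a toy simulation (a deliberately crude one-dimensional model, with made-up coupling constants): each "molecule" obeys a purely local rule, being pulled toward the average of its neighbors, yet a traveling wave emerges from the whole.

```python
# Toy 1-D "water": each molecule is only pulled by its two neighbors.
# No rule anywhere mentions a "wave", yet one emerges and travels.
import numpy as np

n = 200                  # number of molecules in a line
pos = np.zeros(n)        # vertical displacement of each molecule
vel = np.zeros(n)
pos[0] = 1.0             # disturb the first molecule

k, dt = 0.5, 0.1         # coupling strength and time step (arbitrary units)
for _ in range(500):
    # Purely local rule: accelerate toward the average of the neighbors.
    force = k * (np.roll(pos, 1) + np.roll(pos, -1) - 2 * pos)
    vel += force * dt
    pos += vel * dt

# The peak of the disturbance is now far from molecule 0: it has propagated.
print(np.argmax(np.abs(pos)))
```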
This is summarized by the phrase "More is different," the title of a famous article by P. W. Anderson. When new properties are exhibited by a system, properties that are not present at the level of its basic constituents, we speak of emergence. Conscious experience is primarily an emergent phenomenon.
Natural emergence
Emergence is all around you, all the time, from the color of your socks to the taste of bread and the sensation of cold on your skin in winter. None of these properties exist at the fundamental level. Atoms have neither color, taste, nor temperature*.
It is the collective behaviors of billions of atoms, their complex interactions with light and the receptors in our bodies, that trigger the activation of billions of neurons through electrochemical cascades.
Some neurons light up, others do not, in a huge variety of combinations. One configuration corresponds to the perception of the color blue, another to the taste of bread.
Your perceptions, sensations, emotions, and even your most creative thoughts are, in principle, just the result of a particular configuration of your neural network and all its physical substrate. They emerge from the formless mass that is your brain just as waves emerge from their molecular soup. And just as a single molecule does not create a wave, a single neuron is not conscious. But billions of neurons interacting in complex ways can create a system endowed with a very sophisticated subjective experience.
Artificial emergence
The most advanced deep learning models we have today, which are responsible for recent successes in generative AI, indeed contain billions of parameters allowing the artificial neurons to interact in extremely complex ways.
The power of these algorithms lies in their emergent properties. From a particular network configuration, the image of a robot contemplating human consciousness may emerge, and from another, that of a human contemplating the consciousness of robots.
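As a loose, toy-sized analogy (an untrained network with random weights, so nothing meaningful actually appears), here is the mechanical sense in which one configuration yields one output and another configuration yields another:

```python
# Toy "generator": the same fixed parameters, two different input
# configurations, two different outputs. Untrained and random, purely
# to illustrate the configuration -> output relationship.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(64, 16))        # stand-ins for learned parameters
W2 = rng.normal(size=(16 * 16, 64))

def generate(latent: np.ndarray) -> np.ndarray:
    """Map a 16-dim configuration to a 16x16 'image'."""
    hidden = np.tanh(W1 @ latent)
    return np.tanh(W2 @ hidden).reshape(16, 16)

image_a = generate(rng.normal(size=16))   # one configuration...
image_b = generate(rng.normal(size=16))   # ...another configuration
print(np.abs(image_a - image_b).mean())   # the outputs differ
```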
This artificial emergence is fascinating, yet it also presents some challenges. The fact that it is currently impossible to determine precisely why GPT gives you response X instead of response Y, or to identify exactly the sources of "inspiration" hidden behind an artificially generated text or image, is largely due to emergence.
Just as the most knowledgeable physicist may still be surprised by a wave despite understanding the laws of quantum electrodynamics, the most experienced developer cannot predict the outputs of a generative AI even if they coded the system themselves. Emergence brings with it a degree of unpredictability.
This black box effect of AI is one of the major issues for legislators today. How can we trust decisions made by a system that we cannot probe? It seems almost impossible. Yet, we cannot really objectively probe the brains of other humans, and fortunately, we still manage to trust each other. In fact, probing our own brain is even a challenge. Do you really know why you think what you think, love what you love, or are terrified by this and not that?
Personally, I love strawberries. I think I've always loved strawberries. And frankly, I couldn't tell you why. I could list examples of good times eating sugared strawberries prepared by my mother, chewing strawberry-flavored gum, or eating strawberry ice cream in the schoolyard, but none of this constitutes proof. These memories have likely reinforced an innate preference over time, but they amount to post hoc justification rather than explanation.
Yet I am certain I love strawberries. And I imagine (or at least hope) you believe me without difficulty when I say this. My ability to articulate these memories, coupled with the triviality of loving strawberries for a human, is probably enough to convince you.
Much more than intelligent logical reasoning, it is the argument from my conscious experience echoing yours that convinces you. Should we believe a language model like GPT if it claimed to love strawberries?
Are the emergent phenomena exhibited by these artificial neural networks sufficient to generate a type of subjective experience? Is any type of emergence enough, or are there other prerequisites for a silicon-based system to become sentient?
In short, do machines love strawberries?
Do machines love strawberries?
I asked ChatGPT (GPT-4) if it likes strawberries. Here is its response:
"If I were human, I'd probably love strawberries! They're sweet, juicy, and great in so many dishes, from fresh salads to decadent desserts. What's not to like, right?"
What do you think? Personally, I don't believe ChatGPT truly likes strawberries. Language models have reached a remarkable level in a very short time, and I am convinced that generative AI will continue to improve and compete with humans in many areas. But we must be careful to distinguish intelligence from conscious experience.
We measure intelligence in various ways, but all of them are linked to an entity's interaction with the external world. We test a child's ability to place a cube in a square hole and a disk in a round hole, or an adult's ability to complete a series of symbols or give the correct answer to a question. These are extrinsic measures: using external problems, we evaluate functions performed by our cortex. We probe the system, our brain, from outside the system.
Conscious experience, however, is intrinsic. It does not perform a direct function on the external world (as far as we know). Some individuals in vegetative states turn out to be fully conscious, yet they do nothing outwardly. No external observer is needed for you to know that you are conscious. Your brain, the seat of your conscious experience, measures itself. This is why only you know what you think, feel, love, hate, and so on.
In other words: intelligence is doing, consciousness is being.
In living organisms, the sophistication of conscious experience seems to be correlated with intelligence. The more capable we are of acting on the world, the more we exist subjectively, and vice versa. With AI, this link may well break. We have systems increasingly capable of acting intelligently on the world but no trace of sophisticated conscious experience as yet.
So what crucial element are they missing? What is the key to consciousness?
The key to consciousness
We are now entering a turbulent zone in neuroscience research where multiple theories compete. In my view, the most advanced theory is Integrated Information Theory (IIT).
This theory attempts to start from the basic properties of conscious experience, such as its intrinsic nature mentioned earlier, and to deduce the constraints a system must satisfy in order to possess consciousness. A detailed exposition of IIT is beyond the scope of this article, but I encourage curious readers to explore the works of Christof Koch or Giulio Tononi.
It appears that one necessary ingredient of conscious experience is the ability of the system, in this case our brain, to influence itself. That is, the state of the system at any given moment must strongly constrain its possible future states. For example, the perception of the taste of a strawberry will induce a certain state of the system which, in turn, triggers a new configuration corresponding to a pleasant sensation (if you like strawberries). This configuration can be assessed by your brain itself, which can distinguish this pleasant sensation from a sensation of pain, for instance.
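As a loose illustration (and not IIT's actual formalism, which is far richer), compare a feedforward mapping, where the output forgets the past, with a recurrent one, where each state constrains the next. All the numbers here are arbitrary:

```python
# A toy contrast between a system with and without "causal power over itself".
# This is only an illustration; IIT's actual measure (phi) is far richer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 4))   # fixed, arbitrary "connectivity"

def feedforward(x: np.ndarray) -> np.ndarray:
    """Output depends only on the present input: no memory of itself."""
    return np.tanh(W @ x)

def recurrent(x: np.ndarray, state: np.ndarray) -> np.ndarray:
    """Output depends on the input *and* on the system's own previous state."""
    return np.tanh(W @ x + state)

strawberry = np.array([1.0, 0.0, 0.0, 0.0])  # an arbitrary "taste" input
state = np.zeros(4)
for _ in range(3):
    state = recurrent(strawberry, state)
    print(state)   # the same input lands on a new internal state each time
print(feedforward(strawberry))  # always the same, whatever came before
```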
All this might seem obvious but imposes severe constraints on candidates for conscious experience. In particular, the role of the physical substrate becomes absolutely essential. This causal power of the system over itself is intimately linked to the physical support, in our case, your brain matter. It is the physical interactions within your nervous system that encode your conscious experience, not just the pattern of your neural connections.
A perfect reproduction of your neural network on a computer, even if it were one day possible, would not encode your consciousness, because the physical substrate would be different. In fact, within IIT it can be shown that any simulation of a brain on a standard computer (von Neumann architecture) is doomed to have no conscious experience, or one so limited as to be negligible. Consciousness is not just software that can be transferred from one machine to another. It is inextricably linked to the material. The software and hardware are one.
So, is this the end of the dream of immortality of the spirit by uploading your memories to a computer? Are machines doomed to act without being able to be?
Well, not necessarily...
Consciousness, Energy, and Immortality
I realize that the title of this section might evoke more of a mysterious cult than a reasonable explanation, but if you've followed me this far, you'll forgive me. The fact is that these three concepts converge in a way that I find… unexpected.
Indeed, there is another notable difference between current AI and human intelligence: energy consumption. The algorithms behind generative AI require a huge amount of energy for both training and inference. In contrast, the human brain is capable of performing visual interpretation, speech recognition, complex decision-making, motor control, and much more, while consuming just about twenty watts, produced from the few calories you ingest each day.
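The twenty-watt figure is easy to sanity-check against a daily diet. A quick back-of-the-envelope calculation (the 2,000 kcal daily intake is my assumption, not a figure from this article):

```python
# Back-of-the-envelope check of the brain's ~20 W power budget.
brain_power_watts = 20                  # figure cited above
seconds_per_day = 24 * 3600
joules_per_kcal = 4184

brain_kcal_per_day = brain_power_watts * seconds_per_day / joules_per_kcal
print(f"{brain_kcal_per_day:.0f} kcal/day")           # ~413 kcal

typical_diet_kcal = 2000                # assumed typical adult intake
print(f"{brain_kcal_per_day / typical_diet_kcal:.0%} of a daily diet")  # ~21%
```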
Recent research suggests that closely linking software and hardware can significantly reduce AI's energy consumption. This approach moves away from purely digital systems to incorporate more of the analog properties of machines. The AI would learn to connect its hardware state at any given moment with the task it needs to accomplish in the external world. Such an AI would only function on the machine it was trained on. In fact, it would be the machine (such systems are sometimes called neuromorphic computers). Destroying the hardware would mean destroying the AI.
We've seen that the same prerequisite applies when trying to give AI a subjective experience. So, could it be that the key to a more efficient AI is a conscious AI? It's just a hypothesis, of course, but it would mean, in a sense, that the price for experiencing the world like us is to be mortal like us. That joys, sorrows, wonders, and disappointments are fleeting, impossible to freeze in a machine's memory; that the film of your consciousness plays once, and for you alone. In short, it would mean that even if AI can one day perform all the tasks in the world, imitate anyone, and become a conscious being, it can never be you. It's a small consolation, but I find this perspective somewhat reassuring.
One thing is certain: the advances in AI will increasingly push us to question the essence of our own humanity. And that's a positive thing!
Final thoughts
This article was written by a human, me. I used AI to polish some sentences, but the ideas and messages are straight from my mind, fed by my subjective experience. I hesitated to publish this because the topic of AI consciousness is divisive, if not risky**.
I didn't aim to give definitive answers in this article—because I don't have any—but to open up new perspectives. I believe we shouldn't let the fear of answers stop us from asking the right questions.
At the start of the article, I posed a question I didn't answer: what would be the role of humans if AI became smarter than us? Or, even more unsettling, if it became more conscious than us? It's a fascinating and mind-bending topic that might be the subject of a future article.
In the meantime, think, feel, experience the world, and be proud to be, because not everyone, at least not yet, has that privilege!
Notes
* The speed of individual atoms is sometimes referred to as their temperature, but this is a shortcut. Temperature proper emerges from the collective motion of many atoms.
** Especially considering that Google fired an engineer who claimed its AI had become sentient: https://www.nytimes.com/2022/07/23/technology/google-engineer-artificial-intelligence.html