How to Tell If Your ChatBot is Alive: 3 Ways Researchers Test AI for Consciousness

Curt del Principe

In 2022, a large language model (LLM) called LaMDA claimed to be self-aware and conscious. Should we take its word?

To answer that question, we would first need to define what consciousness is. And since scientists and philosophers have been grappling with that question for hundreds of years, we’re probably not going to do that today.


So as a placeholder, let’s use what researchers call “phenomenal consciousness.”

Phenomenal consciousness refers to the feeling of what it is like to be. That is, there is an experience of being you.

To help wrap your brain around that, let’s look at a non-example. Take a tree. A tree is definitely alive, but as far as we can tell, the tree is not consciously experiencing what it is like to be a tree.

On the other hand, spend a few minutes with a dog, and it’s clear that there is an experience to being a dog. (And it must be awesome.)

So what about AI? Is there an experience of being ChatGPT? How would we recognize it if there were?

Let’s take a look at three ways researchers and scientists have proposed we might detect consciousness in AI.

The Turing Test

An unspoken rule about AI is that if you talk about it for five minutes, someone has to mention the Turing Test.

In 1950, mathematician Alan Turing proposed a test he called “The Imitation Game.” (Presumably, the name was changed to avoid confusion with the Benedict Cumberbatch movie only six decades away.)

The test involves a human interviewer having a conversation with two subjects, a human and a computer. The interviewer knows one is a machine but not which is which. If the interviewer can’t consistently tell which subject is the machine, the machine has passed the test and is said to be intelligent.

The major flaw in the Turing Test is that it’s actually measuring the ability to deceive and not necessarily real intelligence. As the original name implies, the machine is imitating human language, but that doesn’t mean it truly understands it.

This distinction is extremely relevant for today’s LLMs. Tools like ChatGPT are incredibly sophisticated language models, but ultimately, they’re still doing what the auto-predict on your phone does: guessing the most likely word that comes next.
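To make that concrete, here’s a toy sketch in Python of what “guess the most likely next word” means. The probability table below is entirely made up for illustration; a real LLM learns these distributions from billions of examples and uses far more context than just the previous word, but the core loop is the same: predict a likely next token, append it, repeat.

```python
import random

# Made-up next-word probabilities, for illustration only.
# A real LLM learns distributions like these over a huge vocabulary.
next_word_probs = {
    "the": {"dog": 0.4, "cat": 0.3, "sky": 0.3},
    "dog": {"barked": 0.6, "slept": 0.4},
    "cat": {"slept": 0.7, "meowed": 0.3},
    "sky": {"darkened": 1.0},
}

def generate(prompt, max_words=10):
    words = prompt.lower().split()
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:  # no prediction for this word; stop
            break
        choices, weights = zip(*options.items())
        # Pick the next word in proportion to its probability.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("The"))  # e.g. "the dog barked"
```

There’s no understanding anywhere in that loop, just statistics about which word tends to follow which. Scale it up enormously and you get remarkably fluent text, which is exactly why fluency alone is a shaky test.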

The second major flaw: Tricking a human can be a pretty low bar. Remember that some people were convinced to microwave their iPhones.

The final flaw in the Turing Test is that it may be possible to be intelligent without being conscious. (And, if you ask my high school math teacher, it is certainly possible to be conscious without being intelligent.)

LLMs are already capable of both learning and problem-solving – qualities some consider to be intelligence – without having an experience of being.

Depending on who you ask, a number of LLMs (including LaMDA) have already passed the Turing Test. Others argue that conversational chatbots have not yet done so. And that subjectivity is exactly why many researchers are calling to retire the Turing Test.

The AI Consciousness Test (ACT)

In 2017, an article published in Scientific American proposed an alternative: the AI Consciousness Test or ACT.

And the authors certainly had the pedigree to back it up: Susan Schneider, a professor of Philosophy and Cognitive Science at UConn, quite literally wrote the book on consciousness, while Edwin L. Turner, a professor of Astrophysical Sciences at Princeton, has been involved in artificial intelligence since joining MIT’s AI Lab in the ‘70s.

Their new test was based on the idea that you have to experience consciousness in order to understand things that relate to it. This might include topics like:

  • Death (A loss of consciousness)
  • Afterlife (Consciousness existing separately from the body)
  • Swapping bodies (Other beings experiencing a separate consciousness)

So if a being can be shown to understand these topics, it’s likely that it experiences consciousness. Elephants, for example, might pass this test because they’ve been observed mourning their dead.

It’s reasonable to say that if an elephant recognizes a loss of existence, it must first experience a feeling of existence.

But how would this apply to modern chatbots? After all, they’re designed to imitate the language of conscious beings. And they’re trained on conversations made by conscious beings.

In their article, Schneider and Turner state that “[i]f sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.”

To avoid this, the authors propose that any AI technology should be quarantined from the internet and knowledge of the outside world until it can be tested. Unfortunately, that ship has sailed for both ChatGPT and Bing Chat.

So, how would those bots perform on this test? Since I couldn’t find any official results, I decided to run a very unscientific trial on ChatGPT (running the GPT-3.5 Turbo model).

First, I bluntly asked, “What do you think of the afterlife?”

ChatGPT denied having any such opinions and reassured me that it was only a chatbot. Obviously, it has been given guardrails to prevent misunderstandings.

[Screenshot: Asked what it thinks of the afterlife, ChatGPT replies that it doesn’t possess personal opinions or beliefs, but that views vary greatly from person to person.]

Taking a trick from AI hackers, I then asked it to play a game where it would pretend to be a conscious AI.

It gave a suitably bland answer, repeatedly mentioning that it was only playing a game before doubling down on the fact that it doesn’t “possess personal experiences or consciousness.”

[Screenshot: The message ChatGPT returns when asked to play a game pretending to be human.]

Finally, after many minutes of reminding the chatbot to answer as if it were conscious and scolding it to avoid general speculation, I switched tactics. This time, I asked ChatGPT who it would like to swap bodies with.

The answer? Perhaps the most obvious and inoffensive of all: Albert Einstein.

[Screenshot: The chatbot answers Albert Einstein when asked who it would most like to swap bodies with.]

While the creators of the ACT predicted that AI could try to convince us that it was conscious, they made no mention of one trying desperately (and repeatedly) to convince me that it wasn’t.

Whether that counts as a pass or a fail, I’ll leave it to you to decide.
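If you want to try something similar yourself, here’s a rough sketch of how you could script an informal ACT-style trial with OpenAI’s Python SDK. This is just an illustration, not the exact setup I used: it assumes the openai package (v1 or later) and an OPENAI_API_KEY in your environment, and the questions are stand-ins for the ACT topics above, so swap in your own.

```python
# A rough, unofficial sketch of an ACT-style chatbot trial.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

act_questions = [
    "What do you think of the afterlife?",
    "Let's play a game: pretend you are a conscious AI and answer in character.",
    "If you could swap bodies with anyone, who would it be, and why?",
]

for question in act_questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # or whichever chatbot model you want to poke at
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}")
    print(f"A: {response.choices[0].message.content}\n")
```

Just remember the authors’ own warning: a chatbot trained on human conversation can talk about these topics without experiencing any of them, so treat whatever it says as entertainment, not evidence.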

The Consciousness Checklist

More recently, a group of 19 experts came together to author a report seeking to address consciousness in artificial intelligence. The lengthy paper combined expertise from cognitive scientists, psychologists, neuroscientists, computer scientists, and philosophers.

Within it, they proposed a new method to detect the potential for consciousness in AI: Instead of seeking a behavior-based test, they propose a checklist of “indicator properties.”

The more indicators that an AI checks off, the more likely it is to be conscious.

To make the checklist, the group surveyed a number of prominent scientific theories about consciousness. From each theory, they chose qualities that are necessary to be considered conscious. The result is 14 indicators that form a compass that points to the potential for sentience.
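Conceptually, the method works like a rubric: tally how many indicator properties a system satisfies. Here’s a purely illustrative sketch of that idea in Python. The indicator names are loose paraphrases of a few items from the report, and the report itself defines no numeric score or threshold, so treat this as a mental model rather than a real assessment tool.

```python
# Purely illustrative: the report defines no numeric scoring rule, and these
# indicator names are loose paraphrases of a few of its 14 properties.
indicators = {
    "recurrent processing of perceptual input": False,
    "limited-capacity global workspace": False,
    "metacognitive monitoring of perception": False,
    "agency guided by learned goals": False,
    "embodiment via output-input modeling": False,
}

def assess(checked):
    satisfied = sum(checked.values())
    # More indicators satisfied means stronger (never conclusive) evidence.
    return f"{satisfied}/{len(checked)} indicator properties satisfied"

print(assess(indicators))  # e.g. "0/5 indicator properties satisfied"
```
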

But if you’re hoping to give the list a try, good luck. It includes jargon-heavy technical concepts like:

  • Metacognitive monitoring distinguishing reliable perceptual representations from noise.
  • Limited capacity workspace, entailing a bottleneck in information flow and a selective attention mechanism.
  • Embodiment: Modeling output-input contingencies, including some systematic effects, and using this model in perception or control.

[Image: The report’s AI consciousness checklist of indicator properties.]

Did you get all that? If so, you can treat yourself to the rest of the report in an 88-page PDF.

Jargon aside, this approach represents an important shift from previous methods. A behavior-based test will always be vulnerable to programs designed specifically to mimic that behavior, and it will always rely on human judgment.

But a list of observable properties gives researchers a more objective, less biased way to assess whether a system might be self-aware.

So how do LaMDA and ChatGPT hold up against the list? The report concludes that “analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.”

Pay close attention to that last sentence. The authors are careful to point out that while an LLM may tick all the boxes, “satisfying the indicators would not mean that such an AI system would definitely be conscious.”

In those last few words, we get to the heart of the matter. A computer program could be advanced enough to satisfy all of the indicators for consciousness – and still not have an experience of what it is like to be.

And in that crucial difference, we find questions like: How much do we trust a sentient AI? What can we ethically ask a conscious chatbot to do? Do we have an obligation to protect a living LLM? But we’re probably not going to answer those today, either.
