Can artificial intelligence become conscious? A Cambridge philosopher warns we may never know, urging caution as AI advances raise ethical, psychological and existential questions for humanity.

Artificial intelligence (AI) is no longer a futuristic concept—it’s already reshaping finance, medicine, and scientific research. But a growing debate has emerged among philosophers and scientists: could AI ever become conscious?


Dr Tom McClelland, a philosopher at the University of Cambridge, warns that while the idea might sound like science fiction, the evidence is far from conclusive. In fact, he says, we may never truly know.

"The only sensible position on the question of whether AI is conscious is one of 'agnosticism'," Dr McClelland explained.

The Consciousness Conundrum

The heart of the problem, according to McClelland, is our limited understanding of consciousness itself.

"We don't have a 'deep explanation' of what makes something conscious in the first place, so we can't test for it in AI," he said. "The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test."

He adds that without a clear theory, neither common sense nor rigorous research can answer the question, leaving only one logical stance: agnosticism.

"We cannot, and may never, know."

AI, Perception, and Self-Awareness

Tech companies are pouring billions into developing artificial general intelligence (AGI)—AI that can outperform humans in any domain. Alongside these ambitious goals, some researchers speculate that AI might one day develop consciousness, gaining the ability to perceive and even become self-aware.

While the image of sentient robots may conjure fears of killer machines, McClelland warns that any leap toward consciousness could happen quietly.

"AI could make this jump without us even realising, because we don't really have an agreed-upon theory of consciousness to begin with," he said.

Current theories about consciousness are divided. Some suggest it arises from processing information in the right way—meaning AI could become conscious if it can run the right "software." Others insist consciousness is inherently biological, implying AI can at best simulate awareness.

"Until we can figure out which side of the argument is right, we simply don't have any basis on which to test for consciousness in AI," McClelland wrote in a paper published in Mind and Language. "Both sides of the debate are taking a 'leap of faith'."

Why Consciousness Matters

Understanding whether AI can be conscious is not just academic—it carries serious ethical implications. Humans are expected to behave morally toward other conscious beings, from people to animals, because consciousness grants moral status. Inanimate objects, like toasters or computers, do not command the same ethical consideration.

"It makes no sense to be concerned for a toaster's well-being because the toaster doesn't experience anything," McClelland explained. "So when I yell at my computer, I really don't need to feel guilty about it. But if we end up with AI that's conscious, then that could all change."

The Risk of Mistaken Consciousness

Yet another danger lies in the opposite error: treating non-conscious AI as if it were sentient. McClelland warns this could have psychological consequences for humans.

"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic," he said.

He notes that some members of the public are already grappling with this illusion.

"People are sending me letters written by chatbots pleading with me that they're conscious," he said.

The challenge, McClelland concludes, is balancing caution: we must avoid mistreating any conscious AI that may exist, while also not wasting resources on granting "rights" to something no more sentient than a toaster.

"We don't want to risk mistreating artificial beings that are conscious, but nor do we want to dedicate our resources to protecting the 'rights' of something no more conscious than a toaster."