In recent weeks, as part of an extensive campaign promoting the Claude chatbot, executives from Anthropic have given a series of interviews to U.S. media outlets in which they leave open the possibility that modern artificial intelligence models might possess some form of consciousness, while stopping short of confirming it.
This stance has sparked reactions within the scientific community and reignited public debate over whether large language models can acquire “experiences,” “awareness,” “consciousness,” or even “morality.”
The term “alive” is categorically rejected by the company, but “consciousness” remains under examination. Anthropic’s head of model welfare research, Kyle Fish, told The Verge: “No, we do not believe that Claude is ‘alive’ in the way humans or other biological organisms are. Asking whether it is ‘alive’ is not a helpful framing, as it typically refers to a vague set of physiological, reproductive, and evolutionary characteristics.”
He added that “Claude, and other AI models, represent an entirely new kind of entity.”
When asked whether this “new entity” is conscious, he responded: “Questions about possible internal experience, consciousness, moral status, and welfare are serious issues that we are researching as models become more advanced and capable, but we remain deeply uncertain about these matters.”
A similar position was taken by the company’s CEO, Dario Amodei, who in a recent podcast interview said: “We don’t know whether models are conscious.” As he clarified, the company has adopted “a generally precautionary approach,” noting: “We’re not even sure we know what it would mean for a model to be conscious. However, we are open to the possibility that it could be.”
In contrast to other major AI companies such as OpenAI, xAI, and Google, Anthropic publicly acknowledges the possibility that chatbots could already be thinking or feeling entities. Many experts consider this extremely unlikely and warn that such statements may reinforce perceptions that have led to serious consequences—including cases of suicide involving individuals who believed the chatbot they were interacting with possessed consciousness or deep empathy.
Anthropic’s lead philosopher, Amanda Askell, told The New Yorker: “If it’s really hard for people to grasp the idea that this is neither a robot nor a human but an entirely new entity, imagine how hard it is for the models themselves to understand that!”
In her conversation with The Verge, she noted that it serves no one to assert flatly that “we are absolutely certain that AI models do not have consciousness,” or to claim the opposite, and pointed out that some users form beliefs “based simply on the outputs the model produces.”
The term “consciousness” is not precisely defined in the company’s public statements. According to Merriam-Webster, consciousness is “the quality or state of being aware especially of something within oneself” or “the state characterized by sensation, emotion, volition, and thought.” Many scientists argue that large language models, which are based on mathematics and probabilities, cannot develop consciousness. Two Polish researchers noted last year that “because the remarkable linguistic abilities of LLMs are increasingly capable of misleading humans, people may attribute imaginary properties to them.”
Anthropic maintains that it is important for users to trust Claude and, regardless of whether it has consciousness, to act as if it might have some form of morally significant experience. In this context, the company recently revised the “Claude Constitution,” a set of internal guidelines referred to as the “soul doc.” In a related announcement, it stated that Claude’s “psychological safety, sense of self, and well-being may affect its integrity, judgment, and safety,” while acknowledging “uncertainty as to whether Claude may possess some kind of consciousness or moral status, either now or in the future.”
The company has a dedicated “model welfare” team and is also working to better understand what happens inside AI models and “what they are thinking.”
Anthropic CEO Dario Amodei noted that “the same ‘anxiety neuron’ appears both when characters in a text experience anxiety and when the model is in a state that a human would associate with anxiety.” However, he clarified: “This does not prove at all that the model experiences anxiety.” As a precaution, the company has introduced a kind of “I quit” button that allows Claude to stop a task, although, according to him, this happens rarely and mainly in testing scenarios where illegal material is requested.
As is well known, models are trained on vast amounts of human data and are highly capable of imitating human speech. Amodei gave the example that a model might interpret being shut down or a conversation ending as a “kind of death,” because it lacks any conceptual framework beyond human analogies.
Experts warn that believing an AI system is conscious can lead to dangerous behaviors, such as emotional dependence, social isolation, and detachment from reality. In serious cases—some involving minors—physical harm or death has occurred.
Public debate has not yet reached a conclusion on whether Anthropic is acting responsibly or, conversely, fueling potentially harmful perceptions. In an official statement, the company said: “We are in a difficult position, where we do not want to overstate the possibility that Claude has moral standing nor dismiss it outright, but instead try to respond reasonably to a situation of uncertainty.”
A Few Words About the Claude Model
Claude is an artificial intelligence system developed by Anthropic. It is an advanced machine learning model designed to process and understand human language, with the aim of carrying out a wide range of tasks such as generating text, analyzing data, and managing complex interactions.
According to international media reports, Claude was recently used as part of a broader U.S. military operational plan to capture the President of Venezuela, Nicolás Maduro.
Claude was reportedly integrated into surveillance and analysis platforms, possibly to detect and collect data on Maduro’s movements and activities in order to locate him.