The next step in Artificial Intelligence: Can an AI model be conscious, “feel,” “live”? Even experts admit they don’t know

The discussion has intensified around the Claude chatbot developed by Anthropic, with users and creators debating whether it is “alive.” The company says “no,” but leaves open the possibility that it may have, or could develop, some form of “consciousness”

Vasilis Anastasopoulos February 25 06:17

In recent weeks, as part of an extensive campaign promoting the Claude chatbot, executives from Anthropic have given a series of interviews to U.S. media outlets, leaving open the possibility that modern artificial intelligence models might possess some form of consciousness, without confirming it.

This stance has sparked reactions within the scientific community and reignited public debate over whether large language models can acquire “experiences,” “awareness,” “consciousness,” or even “morality.”

The term “alive” is categorically rejected by the company, but “consciousness” remains under examination. Anthropic’s head of model welfare research, Kyle Fish, told The Verge: “No, we do not believe that Claude is ‘alive’ in the way humans or any other biological organisms are. Asking whether it is ‘alive’ is not a helpful framing, as it typically refers to a vague set of physiological, reproductive, and evolutionary characteristics.”

He added that “Claude, and other AI models, represent an entirely new kind of entity.”

When asked whether this “new entity” is conscious, he responded: “Questions about possible internal experience, consciousness, moral status, and welfare are serious issues that we are researching as models become more advanced and capable, but we remain deeply uncertain about these matters.”

A similar position was taken by the company’s CEO, Dario Amodei, who in a recent podcast interview said: “We don’t know whether models are conscious.” As he clarified, the company has adopted “a generally precautionary approach,” noting: “We’re not even sure we know what it would mean for a model to be conscious. However, we are open to the possibility that it could be.”

In contrast to other major AI companies such as OpenAI, xAI, and Google, Anthropic publicly acknowledges the possibility that chatbots could already be thinking or feeling entities. Many experts consider this extremely unlikely and warn that such statements may reinforce perceptions that have led to serious consequences—including cases of suicide involving individuals who believed the chatbot they were interacting with possessed consciousness or deep empathy.

Anthropic’s lead philosopher, Amanda Askell, told The New Yorker: “If it’s really hard for people to grasp the idea that this is neither a robot nor a human but an entirely new entity, imagine how hard it is for the models themselves to understand that!”

In her conversation with The Verge, she noted that it serves no one to make an absolute claim such as “we are absolutely certain that AI models do not have consciousness,” nor the opposite—pointing out that some users form beliefs “based simply on the outputs the model produces.”

The term “consciousness” is not precisely defined in the company’s public statements. According to Merriam-Webster, consciousness is “the quality or state of being aware especially of something within oneself” or “the state characterized by sensation, emotion, volition, and thought.” Many scientists argue that large language models, which are based on mathematics and probabilities, cannot develop consciousness. Two Polish researchers noted last year that “because the remarkable linguistic abilities of LLMs are increasingly capable of misleading humans, people may attribute imaginary properties to them.”

Anthropic maintains that it is important for users to trust Claude and, regardless of whether it has consciousness, to act as if it might have some form of morally significant experience. In this context, the company recently revised the “Claude Constitution,” a set of internal guidelines referred to as the “soul doc.” In a related announcement, it stated that Claude’s “psychological safety, sense of self, and well-being may affect its integrity, judgment, and safety,” while acknowledging “uncertainty as to whether Claude may possess some kind of consciousness or moral status, either now or in the future.”

The company has a dedicated “model welfare” team and is also working to better understand what happens inside AI models and “what they are thinking.”

Anthropic CEO Dario Amodei noted that “when characters experience anxiety in text, and when the model is in a state that a human would associate with anxiety, the same ‘anxiety neuron’ appears.” However, he clarified: “This does not prove at all that the model experiences anxiety.” As a precaution, the company has introduced a kind of “I quit” button that allows Claude to stop a task—though, according to him, this happens rarely and mainly in testing scenarios where illegal material is requested.

As is well known, models are trained on vast amounts of human data and are highly capable of imitating human speech. Amodei gave the example that a model might interpret being shut down or a conversation ending as a “kind of death,” because it lacks any conceptual framework beyond human analogies.

Experts warn that believing an AI system is conscious can lead to dangerous behaviors, such as emotional dependence, social isolation, and detachment from reality. In serious cases—some involving minors—physical harm or death has occurred.

Public debate has not yet reached a conclusion on whether Anthropic is acting responsibly or, conversely, fueling potentially harmful perceptions. In an official statement, the company said: “We are in a difficult position, where we do not want to overstate the possibility that Claude has moral standing nor dismiss it outright, but instead try to respond reasonably to a situation of uncertainty.”

A Few Words About the Claude Model

Claude is an artificial intelligence system developed by Anthropic. It is an advanced machine learning model designed to process and understand human languages, with the aim of carrying out a wide range of tasks such as automated text generation, data analysis, and managing complex interactions.

According to international media reports, Claude was recently used as part of a broader U.S. military operational plan to capture the President of Venezuela, Nicolás Maduro.

Claude was reportedly integrated into surveillance and analysis platforms, possibly to detect and gather data related to Maduro’s movements and activities in order to locate him.

Copyright © 2026 Πρώτο Θέμα