In an opinion post on his personal website, Mustafa Suleyman, head of Microsoft AI and co-founder of DeepMind, sounds the alarm about an approaching phenomenon: artificial intelligence that may very soon mimic consciousness so convincingly that it appears genuinely to possess it.
According to him, the danger arises not from the machines themselves, but from the human tendency to attribute personality, intentions and even “rights” to them, which could lead to dangerous social illusions.
His article, dated August 19, was picked up by international outlets such as the BBC and Business Insider. In it, Suleyman notes that the rapid advance of artificial intelligence confronts us with new threats, not only in an eventual era of superintelligence but also at an intermediate stage: when systems become able to convincingly mimic the signs of human consciousness.
He calls this phenomenon “Seemingly Conscious AI” (SCAI): systems that are not truly conscious but that, through language, pseudo-personality, memory and self-referentiality, can be perceived as “beings” with character and will. “The danger is not philosophical, it is social and political,” he stresses.
The threat of illusion
As he explains, the most worrying scenario is that society comes to believe these models have a soul or emotions. That could fuel demands for “rights” for machines or debates about their “welfare”, diverting attention from the real issues facing humans, animals and the environment.
He even speaks of a “danger of psychosis” to describe the possibility that people will develop an emotional dependence on machines, “fall in love” with them, or regard them as supernatural entities. “This will not be limited to those with mental health problems. It will affect a wider section of society,” he says, noting that some people already view AI as divine or feel romantically attached to their digital tools.
What a SCAI will look like
According to Suleyman, the foundations for building such systems already exist:
– Language that persuades and moves.
– Personality that mimics different characters.
– Memory that gives a sense of continuity.
– Self-referentiality, with “preferences” and “experiences”.
– Composite motives that reinforce the impression of consciousness.
The combination of these, he explains, creates a convincing but dangerous illusion.
The need for rules
Suleyman believes such technologies are bound to appear soon and calls for clear regulations: prohibiting AI from being presented as “conscious” and requiring mechanisms that remind users that AI is a tool, not a person.
“AI should be presented solely for what it is: a technology in the service of humans,” he says, stressing that its value lies in supporting creativity, problem-solving and human communication.
He concludes with a clear message: “Create artificial intelligence that makes people’s lives better and more meaningful without fostering the illusion that it is human. We need to build artificial intelligence for humans, not to become human.”
Meta Platforms stops aggressive recruitment strategy for artificial intelligence
Meanwhile, Meta Platforms has decided to hit the brakes on its aggressive AI recruitment strategy, ending a run of headline-grabbing deals for executives and researchers.
The decision, first reported by the Wall Street Journal, took effect last week and is part of a broader restructuring of the new “Meta Superintelligence Labs” division. A company spokesperson described it as a matter of “key organizational design” as the company looks to build a solid structure for its superintelligence initiatives, following the integration of new staff and its annual budget process.
Meta has split its efforts into four sub-groups: one for superintelligence engineering (“TBD Lab”), one for AI products, one for technology infrastructure and one for long-term projects. All fall under Meta Superintelligence Labs, reflecting Mark Zuckerberg’s ambition to create AI capable of surpassing humans in cognitive skills.