More than 700 scientists, political figures, entrepreneurs, and celebrities, including the American right-wing media personalities Steve Bannon and Glenn Beck, today called for a halt to efforts to develop an artificial intelligence (AI) capable of surpassing human abilities, citing the potential risks such a development could pose to humanity.
“We call for a halt to the development of superintelligence until there is scientific consensus that it can be constructed in a controlled and safe manner, and until there is public support for such an effort,” states the page of the initiative, launched by the Future of Life Institute, a U.S.-based nonprofit organization that regularly warns about the harmful consequences of artificial intelligence.
Among the signatories are “fathers” of modern AI, such as Geoffrey Hinton, 2024 Nobel laureate in Physics; Stuart Russell, computer science professor at the University of California, Berkeley; and Yoshua Bengio, professor at the University of Montreal.
The list also includes technology industry figures such as Richard Branson, founder of the Virgin Group, and Steve Wozniak, co-founder of Apple; political figures such as Steve Bannon, former advisor to U.S. President Donald Trump, and Susan Rice, national security advisor under Barack Obama; religious officials such as Paolo Benanti, papal advisor and Vatican expert on AI; as well as celebrities like American singer will.i.am, and even Prince Harry and his wife Meghan Markle.
Support from figures like Bannon reflects potentially growing concern about AI on the populist right, Reuters notes, at a time when many people with ties to Silicon Valley hold significant roles in the Republican administration of U.S. President Donald Trump.
Steve Bannon and Glenn Beck did not immediately respond to requests for comment.
Most major AI developers are aiming to create artificial general intelligence (AGI), a stage at which AI would equal all human cognitive abilities, and eventually superintelligence, which would go beyond those capabilities.
For Sam Altman, head of OpenAI — creator of ChatGPT — superintelligence could be achieved within five years, as he explained in September at an event hosted by the media group Axel Springer.
“It matters little whether it happens in two or fifteen years; building something like this is unacceptable,” Max Tegmark, president of the Future of Life Institute, told Agence France-Presse, adding that companies should not engage in such work “without a regulatory framework in place.”
“We can support the creation of more powerful AI tools — for example, to cure cancer — while simultaneously opposing superintelligence,” he added.
This action echoes a letter from AI researchers and industry leaders published a month ago during the United Nations General Assembly, which called for the establishment of “international agreements on red lines for artificial intelligence” to prevent consequences that could be catastrophic for humanity.