Elon Musk says he told senior leaders in China that the creation of an AI-led “digital superintelligence” could usurp the Chinese Communist Party and take control of the country.

During a Twitter Spaces call on Wednesday, the Twitter owner said he spent “a fair bit of time with senior leadership” during a trip to China in May. He said he discussed the dangers of AI and the technology’s potential to seize power.

“I think that did resonate,” Musk said during the call. “No government wants to find itself unseated by a digital superintelligence.”

The Spaces call on the “future of AI” came as the billionaire unveiled xAI, his new company that aims to rival OpenAI and other tech companies while setting out to “understand the true nature of the universe.”

“I think trying to understand the universe is going to be pro-humanity, from the standpoint that humanity is just much more interesting than not humanity,” he said during the call, which he hosted alongside Congressmen Ro Khanna and Mike Gallagher.

Musk has repeatedly raised alarms about the danger of AI developing into a kind of “superintelligence” that surpasses human capabilities. His warnings have grown louder since the launch of OpenAI’s ChatGPT.

On the call, he said if he could “press pause” on the advancement of AI, he would, but that it didn’t seem realistic.

During his trip to China, Musk was treated like royalty, meeting senior government officials and business leaders to discuss topics including AI.

Musk shared on the call that following his conversations with leaders in the country, there seemed to be interest in working on a “cooperative international framework” to regulate AI. He admitted there was a fair bit of distrust towards the US, however.

Meanwhile, controversial AI theorist Eliezer Yudkowsky sits on the fringe of the industry’s most extreme circle of commentators, for whom the extinction of the human species is the inevitable result of developing advanced artificial intelligence.

“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” Mr. Yudkowsky said. For the past two decades, he has consistently promoted his theory that hostile AI could spark a mass extinction event.

While many in the AI industry shrugged or raised eyebrows at this assessment, he founded the Machine Intelligence Research Institute with funding from Peter Thiel, among others, and collaborated on written work with futurists such as Nick Bostrom.

To say that some of his visions for the end of the world are unpopular would be a gross understatement; they are on par with the prophecy that the world would end in 2012, a prediction based on a questionable reading of an ancient text and a dearth of supporting evidence.

While Mr. Yudkowsky’s views are extreme, concern over AI’s potential for harm has gained traction at the highest echelons of the AI community, including among the chief executives of leading artificial intelligence companies such as OpenAI, Anthropic, and Alphabet’s DeepMind.

The rapid rise of generative AI over just the past eight months has prompted calls for regulation and for a pause in the training of advanced AI systems.

In May, Sam Altman, Demis Hassabis, and Dario Amodei joined hundreds of other leaders and researchers in co-signing a brief statement released by the nonprofit Center for AI Safety that said “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Microsoft co-founder Bill Gates was a signatory, as was Mr. Yudkowsky.

Meanwhile, Alphabet’s Google released its Bard chatbot to users in the EU and Brazil, saying the artificial intelligence tool can now generate responses in more than 40 languages, including Chinese, Hindi, and Spanish.