(OPINION) According to a survey conducted by Stanford University’s Institute for Human-Centered AI, 36 percent of surveyed researchers believe that AI could cause a “nuclear-level catastrophe.” Great.

According to further details from The Byte, the survey was conducted as part of the institute’s annual AI Index report, which is essentially the industry’s state of the union. While the report does hit some high notes, observing that “policymaker interest in AI is on the rise” and that the tech is pushing scientific discovery forward, that 36 percent figure is a difficult number to ignore.

If it makes anyone feel better, a user recently did try to get an autonomous AI system dubbed ChaosGPT to “destroy humanity,” but it didn’t get very far at all.


That 36 percent figure does come with an important caveat. It only refers to AI decision-making — as in, an AI making a choice on its own that ultimately causes a catastrophe — and not human misuse of AI, a growing threat that the report addressed separately later on.

“According to the AIAAIC database… the number of AI incidents and controversies has increased 26 times since 2012,” reads the report. “Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and US prisons using call-monitoring technology on their inmates.”

“This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities,” the researchers wrote. In other words, there are other ways that AI can cause, and already is causing, harm, even if not by its proverbial own hand. Despite these concerns, only 41 percent of natural language processing (NLP) researchers thought that AI should be regulated, according to the report.