(OPINION) The computer scientist and CEO who popularized the term ‘artificial general intelligence’ (AGI) believes AI is verging on an exponential ‘intelligence explosion.’

The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: ‘It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years.’

‘Once you get to human-level AGI,’ added Goertzel, who is sometimes called the ‘father of AGI,’ ‘within a few years you could get a radically superhuman AGI.’

While the futurist admitted that he ‘could be wrong,’ he went on to predict that the only impediment to a runaway, ultra-advanced AI — far more advanced than its human makers — would be if the bot’s ‘own conservatism’ advised caution.

Goertzel made his predictions during his closing remarks last week at the ‘2024 Beneficial AI Summit and Unconference,’ partially sponsored by SingularityNET, the firm where he is CEO.

‘There are known unknowns and probably unknown unknowns,’ Goertzel acknowledged during his talk at the event, held this year in Panama City, Panama.

‘No one has created human-level artificial general intelligence [AGI] yet; nobody has a solid knowledge of when we’re going to get there.’

But unless the required processing power amounted to, in Goertzel’s words, ‘a quantum computer with a million qubits or something,’ an exponential escalation of AI struck him as inevitable.

‘My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI,’ he said.

In recent years, Goertzel has been investigating a concept he calls ‘artificial super intelligence’ (ASI) — an AI so advanced that it matches all of the brain power and computing power of human civilization.

Goertzel listed ‘three lines of converging evidence’ that, he said, support his thesis.

First, he cited the updated work of Google’s long-time resident futurist and computer scientist Ray Kurzweil, who has developed a predictive model suggesting AGI will be achievable in 2029.

Kurzweil’s model, which will be given fresh detail in his forthcoming book ‘The Singularity Is Nearer,’ draws on data documenting the exponential nature of technological growth across other tech sectors.

Next, Goertzel cited the well-known improvements made to large language models (LLMs) over the past few years, which he pointed out have ‘woken up so much of the world to the potential of AI.’

Lastly, the computer scientist, donning his signature leopard print hat, turned to his own research framework, ‘OpenCog Hyperon,’ which is designed to combine various types of AI in a single architecture.

The new infrastructure would marry more mature AI, such as LLMs, with new forms of AI focused on areas of cognitive reasoning beyond language — be it math, physics, or philosophy — to help create a more well-rounded, true AGI.

Goertzel’s ‘OpenCog Hyperon’ has drawn backing and interest from others in the AI space, including Berkeley Artificial Intelligence Research (BAIR), which hosted an article he co-wrote with Databricks CTO Matei Zaharia and others last month.

This is not the first potentially dire or unquestionably bold prediction on AI that Goertzel has made in recent years.