The computer scientist regarded as the “godfather of artificial intelligence” says the government will have to establish a universal basic income to address AI’s impact on inequality.

Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was “apprehensive about AI taking lots of mundane jobs”.

“I was consulted by people in Downing Street, and I advised them that universal basic income was a good idea,” he said.


He said while he felt AI would increase productivity and wealth, the money would go to the rich “and not the people whose jobs get lost and that’s going to be very bad for society”.

Professor Hinton is a pioneer of neural networks, which form the theoretical basis of the current explosion in artificial intelligence.

He worked at Google until last year, when he left the tech giant in order to speak more freely about the dangers of unregulated AI.

A universal basic income is a scheme in which the government pays every citizen a set amount regardless of their means.

Critics say it would be highly costly and divert funding away from public services while not necessarily helping to alleviate poverty.

A government spokesman said there were “no plans to introduce a universal basic income”.

Professor Hinton reiterated his concern that human extinction-level threats were emerging.

Developments over the last year showed governments were unwilling to rein in military use of AI, he said, while the competition to develop products rapidly meant there was a risk tech companies wouldn’t “put enough effort into safety”.

Professor Hinton said, “My guess is that between five and 20 years from now, there’s a probability of half that we’ll have to confront the problem of AI trying to take over.”

This would lead to an “extinction-level threat” for humans because we could have “created a form of intelligence that is just better than biological intelligence… That’s very worrying for us”.

AI could “evolve”, he said, “to get the motivation to make more of itself” and could autonomously “develop a sub-goal of getting control”.

He said there was already evidence of large language models – a type of AI algorithm used to generate text – choosing to be deceptive.

He said recent AI applications to generate thousands of military targets were the “thin end of the wedge”.

Author

  • End Time Headlines

    End Time Headlines is a Ministry that provides News and Headlines from a "Prophetic Perspective" as well as weekly podcasts to inform and equip believers of the Signs and Seasons that we are living in today.
