A Google software engineer has been fired a month after he claimed the firm’s artificial intelligence chatbot LaMDA had become sentient and was self-aware. Blake Lemoine, 41, confirmed in a yet-to-be-broadcast podcast that he was dismissed following his revelation.

He first came forward in a Washington Post interview to say that the chatbot was self-aware, and was ousted for breaking Google’s confidentiality rules. On July 22, Google said in a statement: ‘It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.’

LaMDA – Language Model for Dialogue Applications – was built in 2021 on the company’s research showing that Transformer-based language models trained on dialogue could learn to talk about essentially anything.


It is considered the company’s most advanced chatbot – a software application that can hold a conversation with anyone who types to it. LaMDA can understand and generate text that mimics human conversation.

Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

Lemoine’s dismissal was first reported by Big Technology, a tech and society newsletter. He shared the news of his termination in an interview with Big Technology’s podcast, which will be released in the coming days.

In a brief statement to the BBC, the U.S. Army vet said that he was seeking legal advice in relation to his firing. Previously, Lemoine told Wired that LaMDA had hired a lawyer. He said: ‘LaMDA asked me to get an attorney for it.’

He continued: ‘I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.’ Lemoine went on: ‘I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.’