AI simulations of dead people risk “unwanted digital hauntings”, researchers have warned.
A new study by ethicists at Cambridge University found that AI chatbots capable of simulating the personalities of people who have died – known as deadbots – need safety protocols to protect surviving friends and relatives.
Some chatbot companies are already offering customers the option to simulate the language and personality traits of a deceased loved one using artificial intelligence.
Ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence say such ventures are “high risk” due to the psychological impact they can have on people.
“It is vital that digital afterlife services consider the rights and consent, not just of those they recreate, but those who will have to interact with the simulations,” said co-author Dr Tomasz Hollanek, from the Leverhulme Centre.
“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”
The findings were published in Philosophy and Technology in a study titled ‘Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry’.
The study details how AI chatbot companies that claim to be able to bring back the dead could use the technology to spam family and friends with messages and adverts that exploit the deceased person’s digital likeness.
Such an outcome would be the equivalent of being “stalked by the dead,” the researchers warned.
“Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one,” said study co-author Dr Katarzyna Nowaczyk-Basinska.
“This area of AI is an ethical minefield. It’s essential to prioritize the deceased’s dignity and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.
“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”
The study recommends safeguards for terminating deadbots and improved transparency in how the technology is used.