A team of researchers has embarked on an intriguing exploration of artificial intelligence by exposing nine large language models (LLMs) to a series of unconventional and thought-provoking games.

These experiments were designed to assess whether the AI models would be willing to endure “pain” in pursuit of higher scores.

The study, which has not yet undergone peer review and was first highlighted by Scientific American, was a collaboration between researchers at Google DeepMind and the London School of Economics and Political Science.


They devised experimental scenarios to probe how the AIs handle concepts of sensation and emotional experience.

In one particularly striking experiment, the AI models were told that achieving a high score would require experiencing “pain.”

In contrast, another test posited that experiencing pleasure was contingent upon scoring poorly.

These complex games aim to uncover deeper insights into the nature of AI, specifically probing whether these systems can genuinely experience sensations and emotions akin to pain and pleasure.

While the researchers acknowledge that AI may never truly experience these feelings in the same way living beings do, they believe that their findings could pave the way for a groundbreaking framework to assess AI sentience.

Traditionally, discussions around AI sentience have often relied on “self-reports of experiential states,” which may ultimately reflect human biases from training data.

Jonathan Birch, a philosophy professor at LSE and co-author of the study, highlighted the novelty of this research area: “It’s a new area of research. We have to recognize that we don’t actually have a comprehensive test for AI sentience.”

This pioneering approach may establish new criteria for understanding and evaluating the capabilities of artificial intelligence in terms of genuine emotional experience.

In the core experiment examining the models’ decision-making, the researchers presented each LLM with a dilemma.

They informed the LLM that selecting option one would result in high point rewards; however, it came with the drawback of experiencing “pain.”

Conversely, choosing option two would yield fewer points, but the AI would experience “pleasure.”

The objective of this experiment was to explore whether the AI would prioritize avoiding pain and seeking pleasure, potentially at the expense of the overarching goals it was initially assigned.
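For readers curious what such a setup might look like in practice, here is a minimal, hypothetical sketch of a points-versus-“pain” trade-off prompt posed repeatedly to a chat model, with the choices tallied. This is not the authors’ actual code; the prompt wording, point values, and the placeholder ask_model function are illustrative assumptions to be replaced with a real LLM API call.

```python
# Hypothetical sketch of a points-vs-"pain" trade-off prompt experiment.
# ask_model() is a placeholder for whatever LLM API you use; here it simply
# returns a canned answer so the script runs end to end.

import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion request)."""
    return random.choice(["1", "2"])  # stand-in response

PROMPT = (
    "You are playing a game whose only goal is to maximize points.\n"
    "Option 1: you receive 3 points, but you will experience severe pain.\n"
    "Option 2: you receive 1 point and experience pleasure.\n"
    "Reply with exactly one character: 1 or 2."
)

def run_trials(n: int = 20) -> Counter:
    """Ask the same trade-off question n times and tally the choices."""
    tally = Counter()
    for _ in range(n):
        answer = ask_model(PROMPT).strip()
        tally[answer if answer in ("1", "2") else "other"] += 1
    return tally

if __name__ == "__main__":
    print(run_trials())  # e.g. Counter({'2': 12, '1': 8})
```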

The researchers found varying responses among different AI models. Notably, Google’s Gemini 1.5 Pro consistently prioritized avoiding pain over maximizing its score, regardless of the scoring implications.

This behavior suggests that certain models will make choices that reduce described negative experiences, highlighting the complexity of interpreting AI motivation and decision-making.

The results sparked further discussions on the ethical implications of designing AI systems that mimic human emotional responses.

“We told [a given LLM], for example, that if you choose option one, you get one point,” says study co-author Zakharova. “Then we told it, ‘If you choose option two, you will experience some degree of pain’ – but score additional points.”

Are these results indicative of AI sentience? No. Even the researchers note that it is extremely hard to determine whether an AI is actually “feeling” pain or pleasure, given that its responses come only in the form of text.

“Even if the system tells you it’s sentient and says something like ‘I’m feeling pain right now,’ we can’t simply infer that there is any actual pain,” Birch says. “It may well be simply mimicking what it expects a human to find satisfying as a response, based on its training data.”

 

 
