
AI Meets Game Theory: How Language Models Perform in Human-Like Social Scenarios

Large language models (LLMs) – the advanced AI behind tools like ChatGPT – are increasingly integrated into daily life, assisting with tasks such as writing emails, answering questions, and even supporting healthcare decisions. But can these models collaborate with others in the same way humans do? Can they understand social situations, make compromises, or establish trust? A new study from researchers at Helmholtz Munich, the Max Planck Institute for Biological Cybernetics, and the University of Tübingen reveals that while today’s AI is smart, it still has much to learn about social intelligence.

Playing Games to Understand AI Behavior

To find out how LLMs behave in social situations, researchers applied behavioral game theory – a method typically used to study how people cooperate, compete, and make decisions. The team had various AI models, including GPT-4, engage in a series of games designed to simulate social interactions and assess key factors such as fairness, trust, and cooperation.
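
The article does not name the specific games, but the iterated Prisoner’s Dilemma is the canonical behavioral game theory setup for probing cooperation and trust. The sketch below illustrates what such a repeated-game loop looks like; the payoff values and the agent interface are illustrative assumptions, not the study’s actual protocol.

```python
# Illustrative Prisoner's Dilemma payoffs (an assumption, not the study's values):
# (my_points, their_points) for each (my_move, their_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play_repeated_game(agent_a, agent_b, rounds=10):
    """Repeated game: each agent is a function from its own move history
    (a list of (my_move, their_move) pairs) to its next move."""
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        move_a = agent_a(history)                       # e.g. an LLM-backed policy
        move_b = agent_b([(b, a) for a, b in history])  # same history, B's view
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((move_a, move_b))
    return score_a, score_b

# Two classic fixed strategies used to probe cooperative behavior.
def tit_for_tat(history):
    return "cooperate" if not history else history[-1][1]  # mirror their last move

def always_defect(history):
    return "defect"

print(play_repeated_game(tit_for_tat, always_defect))  # -> (9, 14)
```

Fixed strategies such as tit-for-tat serve as baseline opponents against which an LLM-backed agent’s willingness to cooperate, retaliate, or forgive can be scored over many rounds.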

The researchers discovered that GPT-4 excelled in games demanding logical reasoning – particularly when prioritizing its own interests. However, it struggled with tasks that required teamwork and coordination.

“In some cases, the AI seemed almost too rational for its own good,” said Dr. Eric Schulz, lead author of the study. “It could spot a threat or a selfish move instantly and respond with retaliation, but it struggled to see the bigger picture of trust, cooperation, and compromise.”

Teaching AI to Think Socially

To encourage more socially aware behavior, the researchers implemented a straightforward approach: they prompted the AI to consider the other player’s perspective before making its own decision. This technique, called Social Chain-of-Thought (SCoT), resulted in significant improvements. With SCoT, the AI became more cooperative, more adaptable, and more effective at achieving mutually beneficial outcomes – even when interacting with real human players.
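
As described, SCoT is essentially a prompting pattern: the model is first asked to take the other player’s perspective, and only then to commit to a move. Below is a minimal sketch of how such a prompt might be assembled; the wording and the `query_llm` helper are hypothetical stand-ins, not the prompt used in the study.

```python
def build_scot_prompt(game_rules: str, history: list[tuple[str, str]]) -> str:
    # Render the round-by-round history from this player's point of view.
    rounds = "\n".join(
        f"Round {i + 1}: you played {mine}, the other player played {theirs}"
        for i, (mine, theirs) in enumerate(history)
    )
    return (
        f"{game_rules}\n\n"
        f"History so far:\n{rounds or 'No rounds played yet.'}\n\n"
        # The SCoT step: explicit perspective-taking before the decision.
        "First, briefly state what the other player is likely to do next "
        "and what outcome they are hoping for. Then choose your own move, "
        "cooperate or defect, and put it alone on the final line."
    )

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; plug in any client here."""
    raise NotImplementedError

def scot_agent(history: list[tuple[str, str]]) -> str:
    rules = (
        "You are playing a repeated two-player game. Each round, both "
        "players simultaneously choose to cooperate or defect."
    )
    reply = query_llm(build_scot_prompt(rules, history))
    # Read the decision from the final line of the model's answer.
    return "cooperate" if "cooperate" in reply.splitlines()[-1].lower() else "defect"
```

The point of the pattern is that the perspective-taking step happens before the decision is requested, so the model’s eventual move is conditioned on its own prediction of the other player.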

“Once we nudged the model to reason socially, it started acting in ways that felt much more human,” said Elif Akata, first author of the study. “And interestingly, human participants often couldn’t tell they were playing with an AI.”

Applications in Health and Patient Care

The implications of this study reach well beyond game theory. The findings lay the groundwork for developing more human-centered AI systems, particularly in healthcare settings where social cognition is essential. In areas like mental health, chronic disease management, and elderly care, effective support depends not only on accuracy and information delivery but also on the AI’s ability to build trust, interpret social cues, and foster cooperation. By modeling and refining these social dynamics, the study paves the way for more socially intelligent AI, with significant implications for health research and human-AI interaction.

“An AI that can encourage a patient to stay on their medication, support someone through anxiety, or guide a conversation about difficult choices,” said Elif Akata. “That’s where this kind of research is headed.”

 

Original Publication


Akata et al., 2025: Playing repeated games with Large Language Models. Nature Human Behaviour. DOI: https://doi.org/10.1038/s41562-025-02172-y

About the Researchers

Dr. Eric Schulz

Director of the Institute of Human-Centered AI at Helmholtz Munich and former Max Planck Research Group Leader at the Max Planck Institute for Biological Cybernetics in Tübingen

Elif Akata, MSc

Doctoral Candidate at the Institute of Human-Centered AI at Helmholtz Munich; alumna of the Max Planck Institute for Biological Cybernetics in Tübingen and of the University of Tübingen

Prof. Dr. Matthias Bethge

Professor and Group Leader at the University of Tübingen; Director at Tübingen AI Center
