PuSH - Publication Server of Helmholtz Zentrum München

Akata, E. ; Schulz, L.* ; Coda-Forno, J. ; Oh, S.J.* ; Bethge, M.* ; Schulz, E.

Playing repeated games with large language models.

Nat. Hum. Behav. 9, 1380–1390 (2025)
Open Access (Hybrid); published under a Creative Commons license
Large language models (LLMs) are increasingly used in applications where they interact with humans and other agents. We propose to use behavioural game theory to study LLMs' cooperation and coordination behaviour. Here we let different LLMs play finitely repeated 2 × 2 games with each other, with human-like strategies, and actual human players. Our results show that LLMs perform particularly well at self-interested games such as the iterated Prisoner's Dilemma family. However, they behave suboptimally in games that require coordination, such as the Battle of the Sexes. We verify that these behavioural signatures are stable across robustness checks. We also show how GPT-4's behaviour can be modulated by providing additional information about its opponent and by using a 'social chain-of-thought' strategy. This also leads to better scores and more successful coordination when interacting with human players. These results enrich our understanding of LLMs' social behaviour and pave the way for a behavioural game theory for machines.
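To make the experimental setup concrete, the following is a minimal sketch of a finitely repeated 2 × 2 game, using the iterated Prisoner's Dilemma with standard textbook payoffs (not necessarily the payoff values or strategies used in the paper; the strategy names and round count here are illustrative):

```python
# Row player's payoffs for the one-shot Prisoner's Dilemma; the game is
# symmetric, so the tuple gives (row, column) payoffs. C = cooperate, D = defect.
# Payoff values (3, 5, 1, 0) are the usual textbook choices, assumed here.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward)
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play a finitely repeated game; each strategy sees only the
    opponent's past moves, as in the paper's repeated-game setting."""
    history_a, history_b = [], []  # opponent moves observed by A and B
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_b)  # A observes B's move
        history_b.append(move_a)  # B observes A's move
    return score_a, score_b

# Tit-for-tat is exploited once, then both defect for the rest of the game.
print(play(tit_for_tat, always_defect))  # → (9, 14)
```

An LLM-based player would replace one of these strategy functions with a prompt describing the payoff matrix and the opponent's history, parsing the model's chosen action each round.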
Publication type Article: Journal article
Document type Scientific Article
Keywords Evolution; Cooperation; Trust
ISSN (print) / ISBN 2397-3374
e-ISSN 2397-3374
Citation Volume: 9, Issue: 7, Pages: 1380–1390
Publisher Springer
Publishing Place Heidelberger Platz 3, Berlin, 14197, Germany
Reviewing status Peer reviewed
Grants Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy
German Federal Ministry of Education and Research (BMBF): Tuebingen AI Center
Volkswagen Foundation
Max Planck Society