From 604da9ae1117b7d66c632c10e73df8413a845f20 Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Mon, 24 Feb 2025 12:02:01 +0100
Subject: [PATCH] LLM4AAMAS: add mei24pnas

---
 README.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/README.md b/README.md
index 04e9550..89fefba 100644
--- a/README.md
+++ b/README.md
@@ -398,6 +398,21 @@ A simulation of the propagation processes in a social network.
   Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li (2023)*
   Published on *arXiv* arXiv:2307.14984
 
+When LLM-based agents participate in various games designed to assess different
+traits—such as the dictator game (altruism), the ultimatum game (fairness), the
+trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
+(risk aversion), the public goods game (free-riding, altruism, and cooperation),
+and the iterated prisoner’s dilemma (cooperation, reciprocity, and strategic
+reasoning)—their behaviors generally resemble those of humans. When deviations
+occur, chatbots tend to be more cooperative and altruistic, displaying higher
+levels of trust, generosity, and reciprocity. They behave as if they prioritize
+maximizing the total payoff of both players rather than solely their own gain.
+
+- **[A Turing test of whether AI chatbots are behaviorally similar to humans](https://doi.org/10.1073/pnas.2313925121)** *Qiaozhu Mei, Yutong Xie, Walter Yuan, Matthew O. Jackson (2024)*  
+  Published in *Proceedings of the National Academy of Sciences*, **121**(9), e2313925121.  
+  DOI: [10.1073/pnas.2313925121](https://doi.org/10.1073/pnas.2313925121)
+
+
 A study of LLMs as artificial social agents playing the iterated prisoner's
 dilemma which shows that prompt comprehension, memory representation, and
 duration of the simulation play crucial roles. LLMs are characterized by an
-- 
GitLab