diff --git a/README.md b/README.md
index 04e9550d12a842f5975d39f95a42faf050b0f5e2..89fefba8aceb1d983c5c18742db3ee5733ee9665 100644
--- a/README.md
+++ b/README.md
@@ -398,6 +398,19 @@ A simulation of the propagation processes in a social network.
   Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li (2023)*
   Published on *arXiv* arXiv:2307.14984
 
+When LLM-based agents participate in various games designed to assess different
+traits—such as the dictator game (altruism), the ultimatum game (fairness), the
+trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
+(risk aversion), the public goods game (free-riding, altruism, and cooperation),
+and the iterated prisoner’s dilemma (cooperation, reciprocity, and strategic
+reasoning)—their behaviors generally resemble those of humans. When deviations
+occur, chatbots tend to be more cooperative and altruistic, displaying higher
+levels of trust, generosity, and reciprocity. They behave as if they prioritize
+maximizing the total payoff of both players rather than solely their own gain.
+
+- **[A Turing test of whether AI chatbots are behaviorally similar to humans](https://doi.org/10.1073/pnas.2313925121)** *Qiaozhu Mei, Yutong Xie, Walter Yuan, Matthew O. Jackson (2024)*  
+  Published in *Proceedings of the National Academy of Sciences*, **121**(9), e2313925121.
+
-A study of LLMS as artificial social agents playing the iterated prisoner's
+A study of LLMs as artificial social agents playing the iterated prisoner's
 dilemma which shows that prompt comprehension, memory representation, and
 duration of the simulation play crucial roles. LLMs are characterized by an