When LLM-based agents participate in various games designed to assess different
traits—such as the dictator game (altruism), the ultimatum game (fairness), the
trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
(risk aversion), the public goods game (free-riding, altruism, and cooperation),
and the iterated prisoner’s dilemma (cooperation, reciprocity, and strategic
reasoning)—their behaviors generally resemble those of humans. Where they
deviate, chatbots tend to be more cooperative and altruistic than the typical
human, displaying higher levels of trust, generosity, and reciprocity. They
behave as if they are maximizing the total payoff of both players rather than
solely their own gain.
- **[A Turing test of whether AI chatbots are behaviorally similar to humans](https://doi.org/10.1073/pnas.2313925121)**
  *Qiaozhu Mei, Yutong Xie, Walter Yuan, Matthew O. Jackson (2024)*
  Published in *Proceedings of the National Academy of Sciences*, **121**(9), e2313925121.
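To make the last claim concrete: in the trust game, whatever the first player sends is multiplied (conventionally tripled) before reaching the trustee, so a player maximizing the pair's total payoff should send everything, while a purely self-interested player who expects nothing back should send nothing. Below is a minimal sketch of that contrast, assuming the conventional 3× multiplier and a 10-unit endowment; the parameters are illustrative, not taken from the paper.

```python
def payoffs(endowment, sent, returned):
    """(investor, trustee) payoffs after one round of the trust game.

    Illustrative convention: the sent amount is tripled in transit.
    """
    return endowment - sent + returned, 3 * sent - returned


def best_send(objective, expected_return_rate, endowment=10):
    """Integer amount to send that maximizes `objective`, given the
    fraction of the tripled transfer the trustee is expected to return."""
    def value(sent):
        returned = expected_return_rate * 3 * sent
        return objective(*payoffs(endowment, sent, returned))
    return max(range(endowment + 1), key=value)


selfish = lambda me, other: me          # own payoff only
social = lambda me, other: me + other   # total payoff of both players

# Against a trustee expected to return nothing, a selfish investor sends
# nothing, while a total-payoff maximizer still sends everything: every
# unit sent turns 1 into 3 for the pair, regardless of what comes back.
print(best_send(selfish, expected_return_rate=0.0))  # -> 0
print(best_send(social, expected_return_rate=0.0))   # -> 10
```

Under these assumptions, changing only the objective function flips the prediction from sending nothing to sending the full endowment, which is the direction of the deviation the paper reports for chatbots.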