Commit 604da9ae authored by Maxime MORGE

LLM4AAMAS: add mei24pnas

A simulation of the propagation processes in a social network.
Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li (2023)*
Published on *arXiv* arXiv:2307.14984
When LLM-based agents participate in various games designed to assess different
traits—such as the dictator game (altruism), the ultimatum game (fairness), the
trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
(risk aversion), the public goods game (free-riding, altruism, and cooperation),
and the iterated prisoner’s dilemma (cooperation, reciprocity, and strategic
reasoning)—their behaviors generally resemble those of humans. When deviations
occur, chatbots tend to be more cooperative and altruistic, displaying higher
levels of trust, generosity, and reciprocity. They behave as if they prioritize
maximizing the total payoff of both players rather than solely their own gain.
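The observation above — that chatbots behave as if maximizing the joint payoff rather than their own — can be illustrated with a minimal sketch of a single prisoner's dilemma round. The payoff values and the `best_response` helper below are illustrative assumptions, not taken from any of the cited papers; they use a standard payoff matrix (temptation 5, reward 3, punishment 1, sucker 0).

```python
# Hypothetical sketch (not from the cited papers): one round of the
# prisoner's dilemma, contrasting an agent that maximizes its own payoff
# with one that maximizes the total payoff of both players.

# Payoffs (row player, column player) for actions C (cooperate) / D (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action, objective):
    """Pick the action maximizing the given objective against a fixed opponent move."""
    def score(me):
        mine, theirs = PAYOFFS[(me, opponent_action)]
        return mine + theirs if objective == "total" else mine
    return max(["C", "D"], key=score)

# A purely self-interested agent defects even against a cooperator,
# while a total-payoff maximizer (the behavior attributed to chatbots) cooperates.
print(best_response("C", "own"))    # -> D
print(best_response("C", "total"))  # -> C
```

Under this toy model, defection strictly dominates for a self-interested player, so the cooperative deviations reported for chatbots are consistent with a joint-payoff objective rather than a best response to the standard payoffs.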
- **[A Turing test of whether AI chatbots are behaviorally similar to humans](https://doi.org/10.1073/pnas.2313925121)** *Qiaozhu Mei, Yutong Xie, Walter Yuan, Matthew O. Jackson (2024)*
Published in *Proceedings of the National Academy of Sciences*, **121**(9), e2313925121.
DOI: [10.1073/pnas.2313925121](https://doi.org/10.1073/pnas.2313925121)
A study of LLMs as artificial social agents playing the iterated prisoner's
dilemma, which shows that prompt comprehension, memory representation, and the
duration of the simulation play crucial roles. LLMs are characterized by an
...