From 7216871c05c558dea4e720dac60624ed96fe656b Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Fri, 28 Mar 2025 11:17:23 +0100
Subject: [PATCH] LLM4AAMAS: Add johnson23arxiv

---
 README.md | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index a281e45..983dc2a 100644
--- a/README.md
+++ b/README.md
@@ -342,6 +342,15 @@ simulation.

### Distributed Problem Solving

+AGENTVERSE is a general multi-agent framework that simulates problem-solving
+procedures of human groups.
+
+- **[Agentverse: Facilitating multi-agent collaboration and exploring emergent
+  behaviors](https://openreview.net/forum?id=HywBMyh6JGR)** *Weize Chen, Yusheng
+  Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu,
+  Yi-Hsin Hung, Chen Qian, et al. (2023)* Published in *The Twelfth
+  International Conference on Learning Representations (ICLR 2024)*
+
The authors consider LLMs as multi-robot task planners. They compare
different coordination frameworks for cooperative dialogue among multiple LLMs
as the number of robots increases. While a decentralized communication framework
@@ -356,16 +365,6 @@ produce the most successful plan and scale best to large numbers of agents.
  *2024 IEEE International Conference on Robotics and Automation (ICRA)*, pp.
  4311-4317.

-AGENTVERSE is a general multi-agent framework that simulates problem-solving
-procedures of human groups.
-
-- **[Agentverse: Facilitating multi-agent collaboration and exploring emergent
-  behaviors](https://openreview.net/forum?id=HywBMyh6JGR)** *Weize Chen, Yusheng
-  Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu,
-  Yi-Hsin Hung, Chen Qian, et al.
(2023)* Published in *The Twelfth
-  International Conference on Learning Representations (ICLR 2023)*
-
-
### Social Simulation

LLMs can simulate realistic perceptions, reasoning, and decision-making, react
@@ -460,9 +459,8 @@ simple patterns, and may modify refined belief when taking actions.

- **[Can Large Language Models Serve as Rational Players in Game Theory? A
  Systematic Analysis](https://ojs.aaai.org/index.php/AAAI/article/view/29751)**
  Caoyun Fan, Jindou Chen, Yaohui Jin, and Hao He (2024) *Presented at AAAI*,
-  **38**(16), 17960-17967. [DOI:
-  10.1609/aaai.v38i16.29751](https://doi.org/10.1609/aaai.v38i16.29751)
-
+  **38**(16), 17960-17967.
+
When LLM-based agents participate in various games designed to assess different
traits—such as the dictator game (altruism), the ultimatum game (fairness), the
trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
@@ -516,9 +514,21 @@ selecting its own. By prompting LLMs to imagine possible actions and their
outcomes before making a decision, the authors improve GPT-4’s behavior,
leading it to alternate more effectively.

-- **[Playing Repeated Games with Large Language Models](https://arxiv.org/abs/2305.16867)**
-  Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz (2023) Published on arXiv
+- **[Playing Repeated Games with Large Language
+  Models](https://arxiv.org/abs/2305.16867)** Elif Akata, Lion Schulz, Julian
+  Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz (2023) Published on
+  arXiv
+
+
+The authors report experiments testing for altruistic behavior among AI agents.
+Only the most sophisticated AI agent exhibits generous altruistic behavior in
+the dictator game, sharing with other AI agents at rates that resemble human
+rates of sharing with other humans. However, the AI agent shared substantially
+less of its endowment with the human experimenter or an anonymous charity than
+with other AI agents.
+- **[Evidence of behavior consistent with self-interest and altruism in an
+  artificially intelligent agent](https://arxiv.org/abs/2301.02330)**
+  Tim Johnson, Nick Obradovich (2023) Published on *arXiv*.

### Generative MAS on the shelf