From 11a856510b61e7a179b6c03d8f1001150159841a Mon Sep 17 00:00:00 2001
From: mmorge <maxime.morge@univ-lyon1.fr>
Date: Fri, 9 May 2025 08:38:10 +0200
Subject: [PATCH] LLM4AAMAS: Add li23etal

---
 README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/README.md b/README.md
index 5d24540..00ace28 100644
--- a/README.md
+++ b/README.md
@@ -394,6 +394,27 @@ produce the most successful plan and scale best to large numbers of agents.
   *2024 IEEE International Conference on Robotics and Automation (ICRA)*, pp.
   4311-4317.
 
+This study investigates the performance of LLM-based agents in a cooperative
+multi-agent text game involving Theory of Mind (ToM) inference tasks, and
+compares them with Multi-Agent Reinforcement Learning (MARL). The study
+introduces a prompt-engineering approach designed to mitigate common failures
+that hinder coordination among these agents. A key challenge identified is that
+LLMs tend to overlook relevant information included early in the prompt,
+especially when it is distant from the specific planning query. To address this,
+the authors incorporate an explicit belief state into the prompt to re-emphasize
+task-relevant details, thereby improving coherence and decision-making.
+Furthermore, without an explicit belief representation, agents are prone to
+hallucinations, generating inconsistent or incorrect outputs. The proposed
+method improves collaboration by enabling agents to form and maintain
+more accurate beliefs. Finally, the study notes that LLM-based agents can
+rapidly propagate misinformation, particularly when they struggle to track
+information flow during ToM inference.
+
+- **[Theory of Mind for Multi-Agent Collaboration via Large Language Models](https://doi.org/10.18653/v1/2023.emnlp-main.13)**  
+  Huao Li, Yu Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Charles Lewis, Katia Sycara (2023)  
+  In *Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)*, Singapore, pp. 180–192.  
+  Association for Computational Linguistics.
+
 ### Social Simulation
 
 LLMs can simulate realistic perceptions, reasoning, and decision-making, react
-- 
GitLab