From 3d22955be2a560625006db984c08b4e6373734dd Mon Sep 17 00:00:00 2001
From: mmorge <maxime.morge@univ-lyon1.fr>
Date: Tue, 10 Jun 2025 08:53:34 +0200
Subject: [PATCH] PyGAAMAS: Minor corrections in abstract.txt

---
 doc/paper/ICTAI25/abstract.txt | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/doc/paper/ICTAI25/abstract.txt b/doc/paper/ICTAI25/abstract.txt
index c640a00..eb7579f 100644
--- a/doc/paper/ICTAI25/abstract.txt
+++ b/doc/paper/ICTAI25/abstract.txt
@@ -1,9 +1 @@
-Recent advances in Large Language Models (LLMs) have enabled the creation of
-Generative Agents (GAs) capable of autonomous decision-making in interactive
-settings. This paper investigates whether GAs can exhibit socially credible
-behavior. Drawing from behavioral game theory, we evaluate five state-of-the-art
-models across three canonical game-theoretic environments. Our results show that
-while some GAs can accurately predict their opponent’s behavior, few are able to
-incorporate those predictions into decision-making. These behavioral flaws help
-explain why coordination remains especially challenging: most models struggle to
-align with others, even when communication is allowed.
+Recent advances in Large Language Models (LLMs) have enabled the creation of Generative Agents (GAs) capable of autonomous decision-making in interactive settings. This paper investigates whether GAs can exhibit socially credible behavior. Drawing from behavioral game theory, we evaluate five state-of-the-art models across three canonical game-theoretic environments. Our results show that, while some GAs can accurately predict their opponent’s behavior, few are able to incorporate those predictions into decision-making. These behavioral flaws help explain why coordination remains especially challenging: most models struggle to align with others, even when communication is allowed.
-- 
GitLab