diff --git a/doc/paper/ICTAI25/abstract.txt b/doc/paper/ICTAI25/abstract.txt
index c640a008026a87f6723756d9bd8bb6e656eb8d3b..eb7579fe4e048e84be3cea19a113f8aea5fddebd 100644
--- a/doc/paper/ICTAI25/abstract.txt
+++ b/doc/paper/ICTAI25/abstract.txt
@@ -1,9 +1 @@
-Recent advances in Large Language Models (LLMs) have enabled the creation of
-Generative Agents (GAs) capable of autonomous decision-making in interactive
-settings. This paper investigates whether GAs can exhibit socially credible
-behavior. Drawing from behavioral game theory, we evaluate five state-of-the-art
-models across three canonical game-theoretic environments. Our results show that
-while some GAs can accurately predict their opponent’s behavior, few are able to
-incorporate those predictions into decision-making. These behavioral flaws help
-explain why coordination remains especially challenging: most models struggle to
-align with others, even when communication is allowed.
+Recent advances in Large Language Models (LLMs) have enabled the creation of Generative Agents (GAs) capable of autonomous decision-making in interactive settings. This paper investigates whether GAs can exhibit socially credible behavior. Drawing from behavioral game theory, we evaluate five state-of-the-art models across three canonical game-theoretic environments. Our results show that while some GAs can accurately predict their opponent’s behavior, few are able to incorporate those predictions into decision-making. These behavioral flaws help explain why coordination remains especially challenging: most models struggle to align with others, even when communication is allowed.