From fd79cf2ede012566e2087026b52a8e5fe78133ef Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Sun, 16 Feb 2025 15:21:50 +0100
Subject: [PATCH] LLM4AAMAS: entry description

---
 README.md | 200 +++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 146 insertions(+), 54 deletions(-)

diff --git a/README.md b/README.md
index 93585ae..f204cb0 100644
--- a/README.md
+++ b/README.md
@@ -10,75 +10,123 @@ to generative AAMAS. This list is a work in progress and will be regularly updat
 
 ## Artificial Intelligence
 
-- **[Intelligence artificielle : une approche moderne (4e édition)](https://hal.archives-ouvertes.fr/hal-04245057)**
-   *Stuart Russell, Peter Norvig, Fabrice Popineau, Laurent Miclet, Claire Cadet (2021)*
-   Publisher: Pearson France
+- Artificial Intelligence (AI) involves the analysis, design, implementation,
+  and optimization of methods to enable machines to reproduce or simulate human
+  intelligence.
 
-- **[Apprentissage artificiel - 3e édition : Deep learning, concepts et algorithmes](https://www.eyrolles.com/)**
-   *Antoine Cornuéjols, Laurent Miclet, Vincent Barra (2018)*
-   Publisher: Eyrolles
+    **[Intelligence artificielle : une approche moderne (4e
+    édition)](https://hal.archives-ouvertes.fr/hal-04245057)** *Stuart Russell,
+    Peter Norvig, Fabrice Popineau, Laurent Miclet, Claire Cadet (2021)*
+    Publisher: Pearson France
+
+- Machine learning aims to give machines the ability to improve their
+  performance at solving tasks by learning from data.
+
+    **[Apprentissage artificiel - 3e édition : Deep learning, concepts et
+    algorithmes](https://www.eyrolles.com/)** *Antoine Cornuéjols, Laurent
+    Miclet, Vincent Barra (2018)* Publisher: Eyrolles
 
 
 ## Neural networks (RNN, Transformers)
 
-- **[Learning representations by back-propagating errors](https://doi.org/10.1038/323533a0)**
-   *David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams (1986)*
-   Published in *Nature*
+- The back-propagation method adjusts the connection weights by propagating the
+  error backward from the output layer to the input layer, so as to minimize
+  the loss and bring the classification as close as possible to the optimum; a
+  minimal sketch follows the reference below.
 
--  **[ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)**
-   *Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (2012)*
-   Presented at *NeurIPS*
+    **[Learning representations by back-propagating
+    errors](https://doi.org/10.1038/323533a0)** *David E. Rumelhart, Geoffrey E.
+    Hinton, Ronald J. Williams (1986)* Published in *Nature*
+
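+    As an illustration, here is a minimal NumPy sketch of back-propagation on a
+    one-hidden-layer network (the XOR toy task and all hyper-parameters are
+    illustrative, not taken from the paper):
+
+    ```python
+    import numpy as np
+
+    rng = np.random.default_rng(0)
+
+    # Toy task: XOR, a classic non-linearly-separable classification problem.
+    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
+    y = np.array([[0], [1], [1], [0]], dtype=float)
+
+    # One hidden layer, sigmoid activations.
+    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
+    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
+
+    def sigmoid(z):
+        return 1.0 / (1.0 + np.exp(-z))
+
+    for step in range(5000):
+        # Forward pass.
+        h = sigmoid(X @ W1 + b1)
+        out = sigmoid(h @ W2 + b2)
+        # Backward pass: propagate the error from the output layer back to the input layer.
+        d_out = (out - y) * out * (1 - out)
+        d_h = (d_out @ W2.T) * h * (1 - h)
+        # Gradient-descent update of every connection weight.
+        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
+        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)
+
+    print(out.round(2))  # should converge towards [0, 1, 1, 0]
+    ```
+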
+- Deep convolutional neural networks have halved the image classification error
+  rate on the ImageNet dataset.
+
+    **[ImageNet Classification with Deep Convolutional Neural
+    Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)**
+    *Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (2012)* Presented at *NeurIPS*
 
 
 ## Large Language Models
 
-- **[A Survey of Large Language Models](https://arxiv.org/abs/2303.18223)**
-    *Wayne Xin Zhao, Kun Zhou, Junyi Li, et al. (2024)*
-    Published on *arXiv*
+- This literature review of recent advances in LLMs shows that scaling can
+  largely improve model capacity.
+
+    **[A Survey of Large Language Models](https://arxiv.org/abs/2303.18223)**
+    *Wayne Xin Zhao, Kun Zhou, Junyi Li, et al. (2024)* Published on *arXiv*
+
+- Building on the planning and reasoning abilities of LLMs, this survey
+  considers LLM-based multi-agent systems for complex problem-solving and world
+  simulation.
+
+    **[Large Language Model based Multi-Agents: A Survey of Progress and
+    Challenges](https://arxiv.org/abs/2402.01680)** *Taicheng Guo et al. (2024)*
+    Published on *arXiv* arXiv:2402.01680 [cs.CL]
 
-- **[Large Language Model based Multi-Agents: A Survey of Progress and
-  Challenges](https://arxiv.org/abs/2402.01680)** *Taicheng Guo et al. (2024)*
-  Published on *arXiv* arXiv:2402.01680 [cs.CL]
+- A framework for achieving strong natural language understanding with a single
+  task-agnostic model through generative pre-training and discriminative
+  fine-tuning.
 
-- **[Improving language understanding by generative
+    **[Improving language understanding by generative
     pre-training](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf)**
-    *Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (2018)*
+    *Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (2018)*
     Published by OpenAI
 
-- **[BERT: Pre-training of Deep Bidirectional Transformers for Language
-   Understanding](https://www.aclweb.org/anthology/N19-1423/)** *Jacob Devlin,
-   Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2019)* Presented at
-   *NAACL-HLT*
+- A language model pre-trained bidirectionally on large unlabeled corpora.
 
-- **[Sequence to Sequence Learning with Neural
-   Networks](https://arxiv.org/abs/1409.3215)** *Ilya Sutskever, Oriol Vinyals,
-   Quoc V. Le (2014)* Published on *arXiv*
+    **[BERT: Pre-training of Deep Bidirectional Transformers for Language
+    Understanding](https://www.aclweb.org/anthology/N19-1423/)** *Jacob Devlin,
+    Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2019)* Presented at
+    *NAACL-HLT*
 
-- **[Learning Phrase Representations using RNN Encoder-Decoder for Statistical
-    Machine Translation](https://arxiv.org/abs/1406.1078)** *Kyunghyun Cho, Bart
-    van Merrienboer, Caglar Gulcehre, et al. (2014)* Published on *arXiv*
+- Recurrent Neural Networks (RNNs), specifically designed to process sequential data,
+  can capture contextual relationships between elements of a text, known as
+  tokens.
+
+    **[Sequence to Sequence Learning with Neural
+    Networks](https://arxiv.org/abs/1409.3215)** *Ilya Sutskever, Oriol Vinyals,
+    Quoc V. Le (2014)* Published on *arXiv*
+
+- The flexibility of RNNs allows for the alignment of contextual representations,
+  thus overcoming the limitations of word-for-word translation.
+
+    **[Learning Phrase Representations using RNN Encoder-Decoder for Statistical
+    Machine Translation](https://arxiv.org/abs/1406.1078)** *Kyunghyun Cho, Bart
+    van Merrienboer, Caglar Gulcehre, et al. (2014)* Published on *arXiv*
 
 ## Tuning
 
 ### Instruction tuning
 
-- **[LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)**
-    *Edward J. Hu, Yelong Shen, Phillip Wallis, et al. (2021)*
-    Published on *arXiv*
+- Fine-tuning a pre-trained language model requires significantly less data and
+  computation, especially when parameter-efficient approaches such as Low-Rank
+  Adaptation (LoRA) are used; a minimal sketch follows the reference below.
 
-- **[Language Models are Few-Shot
-   Learners](https://papers.nips.cc/paper/2020/file/fc2c7f9a3f3f86cde5d8ad2c7f7e57b2-Paper.pdf)**
-   *Tom Brown, Benjamin Mann, Nick Ryder, et al. (2020)* Presented at *NeurIPS*
+    **[LoRA: Low-Rank Adaptation of Large Language
+    Models](https://arxiv.org/abs/2106.09685)** *Edward J. Hu, Yelong Shen,
+    Phillip Wallis, et al. (2021)* Published on *arXiv*
+
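+    A minimal NumPy sketch of the LoRA idea: the frozen pre-trained weight `W`
+    is augmented with a trainable low-rank update `BA` (shapes, rank, and
+    scaling are illustrative):
+
+    ```python
+    import numpy as np
+
+    rng = np.random.default_rng(0)
+    d, k, r = 512, 512, 8              # layer dimensions and LoRA rank (r << d)
+
+    W = rng.normal(size=(d, k))        # pre-trained weight, kept frozen
+    A = rng.normal(size=(r, k)) * 0.01 # trainable down-projection
+    B = np.zeros((d, r))               # trainable up-projection, zero-initialized
+    alpha = 16                         # scaling hyper-parameter
+
+    def lora_forward(x):
+        # h = W x + (alpha / r) * B (A x): only A and B receive gradient
+        # updates, i.e. r * (d + k) trainable parameters instead of d * k.
+        return W @ x + (alpha / r) * (B @ (A @ x))
+
+    print(lora_forward(rng.normal(size=k)).shape)  # (512,)
+    ```
+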
+- Scaling up language models greatly improves few-shot performance; their
+  apparent mastery of textual understanding closely resembles human performance.
+
+    **[Language Models are Few-Shot
+    Learners](https://papers.nips.cc/paper/2020/file/fc2c7f9a3f3f86cde5d8ad2c7f7e57b2-Paper.pdf)**
+    *Tom Brown, Benjamin Mann, Nick Ryder, et al. (2020)* Presented at *NeurIPS*
 
 ### Alignment tuning
 
-- **[Training language models to follow instructions with human
+- Instruction tuning aims to bridge the gap between the model’s original
+  objective — generating text — and user expectations, where users want the
+  model to follow their instructions and perform specific tasks.
+
+    **[Training language models to follow instructions with human
    feedback](https://papers.nips.cc/paper/2022/hash/17f4c5f98073d1fb95f7e53f5c7fdb64-Abstract.html)**
    *Long Ouyang, Jeffrey Wu, Xu Jiang, et al. (2022)* Presented at *NeurIPS*
 
-- [Strong and weak alignment of large language models with human
-  value](https://doi.org/10.1038/s41598-024-70031-3). Khamassi, M., Nahon, M. &
-  Chatila, R. *Sci Rep* **14**, 19399 (2024).
+- Strong alignment requires cognitive abilities such as understanding and
+  reasoning about agents’ intentions and their ability to causally produce
+  desired effects.
+
+    **[Strong and weak alignment of large language models with human
+    values](https://doi.org/10.1038/s41598-024-70031-3)** Khamassi, M., Nahon, M.
+    & Chatila, R. *Sci Rep* **14**, 19399 (2024).
 
 ## Existing LLMs
 
@@ -130,6 +178,9 @@ Many models are available at the following URLs:
 
 ### ICL
 
+In-context learning involves providing the model with task demonstrations or
+specific information directly in the prompt, without requiring additional
+training.
+
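+For instance, a few-shot prompt can be assembled as follows (a minimal sketch;
+the demonstrations are illustrative and the resulting string can be sent to any
+LLM completion endpoint):
+
+```python
+# Few-shot in-context learning: the task is specified entirely in the prompt.
+demonstrations = [
+    ("I loved this movie!", "positive"),
+    ("A complete waste of time.", "negative"),
+]
+
+def build_icl_prompt(query: str) -> str:
+    lines = ["Classify the sentiment of each review."]
+    for text, label in demonstrations:
+        lines.append(f"Review: {text}\nSentiment: {label}")
+    lines.append(f"Review: {query}\nSentiment:")
+    return "\n\n".join(lines)
+
+print(build_icl_prompt("The plot was gripping from start to finish."))
+```
+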
 - **A Survey on In-context Learning** *Qingxiu Dong, Lei Li, Damai Dai, Ce
   Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Baobao Chang,
   Xu Sun, Lei Li, Zhifang Sui (2024)* Presented at the *Conference on Empirical
@@ -138,14 +189,22 @@ Many models are available at the following URLs:
 
 ### CoT
 
+Chain-of-thought is a prompting strategy that, instead of being limited to
+input-output pairs, incorporates intermediate reasoning steps that serve as a
+link between the inputs and the output.
+
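+A minimal sketch of such a prompt (the worked demonstration is illustrative,
+not taken from the paper):
+
+```python
+# The single demonstration exposes intermediate reasoning steps, which nudges
+# the model to produce a step-by-step derivation before its final answer.
+cot_prompt = """\
+Q: A farm has 3 pens with 4 sheep each. 2 sheep are sold. How many remain?
+A: There are 3 * 4 = 12 sheep in total. After selling 2, 12 - 2 = 10 remain.
+The answer is 10.
+
+Q: A box holds 6 eggs. How many eggs are in 7 full boxes if 5 eggs break?
+A:"""
+
+print(cot_prompt)  # complete this with any LLM
+```
+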
 - **[Chain-of-Thought Prompting Elicits Reasoning in Large Language
   Models](https://papers.nips.cc/paper/52604-chain-of-thought-prompting-elicits-reasoning-in-large-language-models)**
   *Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, et al. (2022)*
   Presented at *NeurIPS*
 
-
 ### RAG
 
+Retrieval-Augmented Generation (RAG) is a prompting strategy that involves
+integrating relevant information from external data sources into the
+instructions to enhance the model’s responses using specific and/or recent
+knowledge.
+
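+A minimal sketch of the idea with a toy word-overlap retriever (a real system
+would rely on dense embeddings and a vector store):
+
+```python
+# Toy RAG pipeline: retrieve the most relevant document, then prompt with it.
+documents = [
+    "The Eiffel Tower was completed in 1889 for the World's Fair.",
+    "The Louvre is the most visited museum in the world.",
+    "Mont Blanc is the highest mountain in the Alps.",
+]
+
+def retrieve(query: str) -> str:
+    # Rank documents by word overlap with the query (stand-in for embeddings).
+    q = set(query.lower().split())
+    return max(documents, key=lambda d: len(q & set(d.lower().split())))
+
+def build_rag_prompt(query: str) -> str:
+    return (f"Answer using the context below.\n"
+            f"Context: {retrieve(query)}\n"
+            f"Question: {query}\nAnswer:")
+
+print(build_rag_prompt("When was the Eiffel Tower completed?"))
+```
+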
 - **[Retrieval-Augmented Generation for Large Language Models: A
   Survey](https://arxiv.org/abs/2312.10997)** *Yunfan Gao, Yun Xiong, Xinyu Gao,
   Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, Haofen Wang
@@ -153,11 +212,22 @@ Many models are available at the following URLs:
 
 ## Generative Autonomous Agents
 
-- **[A Survey on Large Language Model Based Autonomous Agents](https://arxiv.org/abs/2308.11432)** Lei Wang, Chen
-  Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai
-  Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Jirong Wen (2024)*
-  Published in *Frontiers of Computer Science* (Volume 18, Issue 6, Pages
-  186345) Publisher: Springer
+Leveraging the commonsense knowledge integrated into LLMs represents a promising
+solution to equip autonomous agents with the capabilities necessary to adapt to
+new tasks, while reducing reliance on knowledge engineering or trial-and-error
+learning.
+
+- **[A Survey on Large Language Model Based Autonomous
+    Agents](https://arxiv.org/abs/2308.11432)** *Lei Wang, Chen Ma, Xueyang Feng,
+    Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen,
+    Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Jirong Wen (2024)* Published in
+    *Frontiers of Computer Science* (Volume 18, Issue 6, Pages 186345)
+    Publisher: Springer
+
+Multiple works aim to equip LLMs with the ability to use external tools, such as
+a calculator, a calendar, a DBMS, a code interpreter, a search engine, a machine
+translation tool, a question-answering system, or another AI model.
+
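+The pattern can be sketched as follows (the action syntax and the hard-coded
+model output are purely illustrative, not a specific framework's protocol):
+
+```python
+import ast
+import operator
+
+def calculator(expression: str) -> str:
+    # Safely evaluate arithmetic by walking the AST (no bare eval).
+    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
+           ast.Mult: operator.mul, ast.Div: operator.truediv}
+    def ev(node):
+        if isinstance(node, ast.BinOp):
+            return ops[type(node.op)](ev(node.left), ev(node.right))
+        if isinstance(node, ast.Constant):
+            return node.value
+        raise ValueError("unsupported expression")
+    return str(ev(ast.parse(expression, mode="eval").body))
+
+TOOLS = {"calculator": calculator}
+
+# Suppose the LLM emitted this action string in response to a user question.
+model_output = "CALL calculator: (127 + 89) * 3"
+
+if model_output.startswith("CALL "):
+    name, arg = model_output[5:].split(": ", 1)
+    observation = TOOLS[name](arg)
+    print(observation)  # fed back to the model to draft the final answer
+```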
 
 - **[HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging
   Face](https://papers.nips.cc/paper/2023/hash/38154-hugginggpt-solving-ai-tasks-with-chatgpt-and-its-friends-in-hugging-face.pdf)**
@@ -169,25 +239,33 @@ Many models are available at the following URLs:
    *Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, et al. (2023)*
    Presented at *NeurIPS*
 
+
+To react autonomously in an environment, a generative agent must interpret its
+perceptions (e.g., a user request) based on the knowledge stored in its memory,
+reason, and plan actions. It must execute the plan step by step with the help of
+tools and refine the plan based on feedback from the environment.
+
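+This control loop can be sketched as follows (`llm` and `tools` are
+placeholders for a language model and a tool registry, not a specific
+framework's API):
+
+```python
+def run_agent(goal: str, llm, tools, max_steps: int = 10) -> str:
+    """Minimal generative-agent loop: interpret, plan, act, and refine."""
+    memory = [f"Goal: {goal}"]          # knowledge stored in the agent's memory
+    plan = llm("Draft a step-by-step plan.\n" + "\n".join(memory))
+    for _ in range(max_steps):
+        # Decide the next action from the plan and the observations so far.
+        action = llm("Plan:\n" + plan + "\nNext action?\n" + "\n".join(memory))
+        if action.startswith("FINISH"):
+            return action.removeprefix("FINISH").strip()
+        # Execute one step of the plan with the help of a tool.
+        tool_name, _, arg = action.partition(": ")
+        observation = tools[tool_name](arg)
+        memory.append(f"Did {action}; observed {observation}")
+        # Refine the plan based on feedback from the environment.
+        plan = llm("Revise the plan given:\n" + "\n".join(memory))
+    return "No answer within the step budget."
+```
+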
 - **[Cognitive Architectures for Language Agents](https://arxiv.org/abs/2309.02427)**
     *Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths (2024)*
     Published on *arXiv*
 
 
--  **[Large language models as simulated economic agents: What can we learn from
+LLMs have the ability to emulate a real human subject in certain experiments
+from experimental economics or social psychology.
+
+- **[Large language models as simulated economic agents: What can we learn from
    homo silicus?](https://www.nber.org/papers/w31122)** Horton, J. J. (2023).
-   National Bureau of Economic Research.   
+   National Bureau of Economic Research.
+
+AGENTBENCH is a systematically designed, multi-dimensional, evolving benchmark
+for evaluating LLMs as agents; it reveals a significant performance gap between
+top-tier commercial models and their open-source (OSS) competitors.
 
 - **[AgentBench: Evaluating LLMs as
   Agents](https://openreview.net/forum?id=zAdUB0aCTQ)**. Xiao Liu et al. Poster.
   Proc. of 12th International Conference on Learning Representations (ICLR),
   Vienna, Austria, May 7-11, 2024.
 
-    AGENTBENCH a systematically designed multi-dimensional evolving benchmark
-    for evaluating LLMs as agents which measure a significant performance gap
-    between these top-tier models and their OSS competitors.
-
-
 ### Generative Autonomous Agents on the shelf
 
 - [LangChain](https://www.langchain.com) is an open-source framework for
@@ -207,6 +285,10 @@ Many models are available at the following URLs:
 
 ## Generative MAS
 
+LLMs can simulate realistic perceptions, reasoning, and decision-making. They
+can react adaptively to environments without predefined explicit instructions
+by adjusting their responses through contextual learning mechanisms,
+autonomously generate objectives, and interact and communicate in natural
+language.
 
 - **[Large language models empowered agent-based modeling and simulation: A
   survey and perspectives](https://doi.org/10.1057/s41599-024-01235-9)** **Chen
@@ -216,6 +298,9 @@ Many models are available at the following URLs:
   [repository](https://github.com/tsinghua-fib-lab/LLM-Agent-Based-Modeling-and-Simulation).
 
 
+Simulacra studies the emergent social behaviors of a generative multi-agent
+simulation in an environment inspired by The Sims.
+
 - **[Social Simulacra: Creating Populated Prototypes for Social Computing
   Systems](https://dl.acm.org/doi/10.1145/3526110.3545617)** *Joon Sung Park,
   Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, Michael S.
@@ -232,17 +317,24 @@ Many models are available at the following URLs:
   Symposium on User Interface Software and Technology* Articleno: 2, Pages: 22,
   Location: San Francisco, CA, USA, Series: UIST '23
 
+AGENTVERSE is a general multi-agent framework that simulates problem-solving
+procedures of human groups.
+
 - **[Agentverse: Facilitating multi-agent collaboration and exploring emergent
   behaviors](https://openreview.net/forum?id=HywBMyh6JGR)** *Weize Chen, Yusheng
   Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu,
   Yi-Hsin Hung, Chen Qian, et al. (2023)* Published in *The Twelfth
   International Conference on Learning Representations (ICLR 2023)*
 
+An open-source platform to simulate a human society.
+
 - **[Training socially aligned language models on simulated social
   interactions](https://arxiv.org/abs/2305.16960)** *Ruibo Liu, Ruixin Yang,
   Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, Soroush Vosoughi
   (2023)* Published on *arXiv* arXiv:2305.16960
 
+A simulation of the propagation processes in a social network.
+
 - **[S3: Social-network Simulation System with Large Language Model-Empowered
   Agents](https://arxiv.org/abs/2307.14984)** *Chen Gao, Xiaochong Lan, Zhihong
   Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li (2023)*
@@ -271,7 +363,7 @@ Many models are available at the following URLs:
   where a lead Orchestrator agent is responsible for high-level planning,
   directing other agents and tracking task progress.
 
-- [CrewAI](https://github.com/crewAIInc/crewAI) combines LLM-based agent with precise control flow.   
+- [CrewAI](https://github.com/crewAIInc/crewAI) combines LLM-based agents with precise control flow.
 
 - [Agno](https://github.com/agno-agi/agno) is a lightweight framework for
   building generative multi-agent systems with workflows.
-- 
GitLab