# LLM4AAMAS

Generative Autonomous Agents and Multi-Agent Systems (AAMAS) offer promising
opportunities for solving problems in open environments and simulating complex
social dynamics.

This repository contains a collection of papers and resources related
to generative AAMAS. The list is a work in progress and will be updated regularly with new resources.


## Artificial Intelligence

- **[Intelligence artificielle : une approche moderne (4e édition)](https://hal.archives-ouvertes.fr/hal-04245057)**
   *Stuart Russell, Peter Norvig, Fabrice Popineau, Laurent Miclet, Claire Cadet (2021)*
   Publisher: Pearson France

- **[Apprentissage artificiel - 3e édition : Deep learning, concepts et algorithmes](https://www.eyrolles.com/)**
   *Antoine Cornuéjols, Laurent Miclet, Vincent Barra (2018)*
   Publisher: Eyrolles


## Neural networks (RNN, Transformers)

- **[Learning representations by back-propagating errors](https://doi.org/10.1038/323533a0)**
   *David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams (1986)*
   Published in *Nature*

- **[ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)**
   *Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (2012)*
   Presented at *NeurIPS*


## Large Language Models

- **[A Survey of Large Language Models](https://arxiv.org/abs/2303.18223)**
    *Wayne Xin Zhao, Kun Zhou, Junyi Li, et al. (2024)*
    Published on *arXiv*

- **[Large Language Model based Multi-Agents: A Survey of Progress and
  Challenges](https://arxiv.org/abs/2402.01680)**
  *Taicheng Guo et al. (2024)*
  Published on *arXiv* (arXiv:2402.01680 [cs.CL])

- **[Improving language understanding by generative
    pre-training](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf)**
    *Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (2018)*
    Published by OpenAI

- **[BERT: Pre-training of Deep Bidirectional Transformers for Language
   Understanding](https://www.aclweb.org/anthology/N19-1423/)** *Jacob Devlin,
   Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2019)* Presented at
   *NAACL-HLT*

- **[Sequence to Sequence Learning with Neural
   Networks](https://arxiv.org/abs/1409.3215)** *Ilya Sutskever, Oriol Vinyals,
   Quoc V. Le (2014)* Published on *arXiv*

- **[Learning Phrase Representations using RNN Encoder-Decoder for Statistical
    Machine Translation](https://arxiv.org/abs/1406.1078)** *Kyunghyun Cho, Bart
    van Merrienboer, Caglar Gulcehre, et al. (2014)* Published on *arXiv*

## Tuning

### Instruction tuning

- **[LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)**
    *Edward J. Hu, Yelong Shen, Phillip Wallis, et al. (2021)*
    Published on *arXiv* (a minimal sketch of the low-rank update follows this list)

- **[Language Models are Few-Shot
   Learners](https://papers.nips.cc/paper/2020/file/fc2c7f9a3f3f86cde5d8ad2c7f7e57b2-Paper.pdf)**
   *Tom Brown, Benjamin Mann, Nick Ryder, et al. (2020)* Presented at *NeurIPS*
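
As referenced above, LoRA replaces full fine-tuning of a weight matrix `W0` with a trainable low-rank correction `(alpha / r) * B A` while `W0` stays frozen. Below is a minimal NumPy sketch of the corresponding forward pass; the dimensions, scaling, and initialisation values are illustrative, not tied to any particular implementation.

```python
import numpy as np

def lora_forward(x, W0, A, B, alpha=16):
    """Linear layer with a LoRA adapter: y = x W0^T + (alpha / r) x A^T B^T.

    W0 : frozen pretrained weight, shape (d_out, d_in)
    A  : trainable factor, shape (r, d_in), small random initialisation
    B  : trainable factor, shape (d_out, r), zero-initialised so the adapter
         starts as a no-op
    Only A and B receive gradients during fine-tuning.
    """
    r = A.shape[0]
    return x @ W0.T + (alpha / r) * (x @ A.T) @ B.T

# Toy dimensions: d_in = 64, d_out = 32, rank r = 4
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))              # a batch of 8 input activations
W0 = rng.normal(size=(32, 64))            # frozen pretrained weights
A = rng.normal(scale=0.01, size=(4, 64))  # trainable, low-rank
B = np.zeros((32, 4))                     # trainable, zero-initialised
print(lora_forward(x, W0, A, B).shape)    # -> (8, 32)
```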

### Alignment tuning

- **[Training language models to follow instructions with human
   feedback](https://papers.nips.cc/paper/2022/hash/17f4c5f98073d1fb95f7e53f5c7fdb64-Abstract.html)**
   *Long Ouyang, Jeffrey Wu, Xu Jiang, et al. (2022)* Presented at *NeurIPS*
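
The alignment objective in the paper above combines a learned reward model with a penalty for drifting away from the supervised policy. The snippet below is a deliberately simplified, illustrative sketch of that KL-penalised reward; the function name and the value of `beta` are assumptions made for the example.

```python
def kl_penalised_reward(reward_model_score, logprob_policy, logprob_reference, beta=0.1):
    """Simplified per-sample reward for RLHF-style alignment tuning.

    reward_model_score : scalar score r(x, y) from the learned reward model
    logprob_policy     : log-probability of the response under the current policy
    logprob_reference  : log-probability under the frozen supervised (SFT) model
    The difference of log-probabilities is a one-sample estimate of the KL
    divergence, which keeps the tuned policy close to the reference model.
    """
    kl_estimate = logprob_policy - logprob_reference
    return reward_model_score - beta * kl_estimate
```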

## Existing LLMs

Many models are available at the following URLs:
[https://ollama.com](https://ollama.com),
[https://www.nomic.ai/gpt4all](https://www.nomic.ai/gpt4all) and
[https://huggingface.co/models](https://huggingface.co/models).
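
For example, a locally hosted model served by Ollama can be queried over its HTTP API. The sketch below assumes an Ollama server running on its default port with a model such as `llama3` already pulled; the endpoint and payload follow Ollama's documented REST API, but check the current documentation before relying on them.

```python
import requests

# Assumes `ollama serve` is running locally and `ollama pull llama3` was done beforehand.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "In one sentence, what is a multi-agent system?",
        "stream": False,  # return the full completion as a single JSON object
    },
    timeout=120,
)
print(response.json()["response"])
```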

- **[GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)**
    *OpenAI Team (2024)*
    Published on *arXiv*

- **[The Llama 3 Herd of Models](https://arxiv.org/abs/2407.21783)**
    *Meta Team (2024)*
    Published on *arXiv*

- **[Stanford Alpaca: An Instruction-Following LLaMa Model](https://github.com/tatsu-lab/stanford_alpaca)**
    *Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, et al. (2023)*
    Published on *GitHub*

- **[Gemma 2: Improving Open Language Models at a Practical Size](https://arxiv.org/abs/2408.00118)**
  *Google AI Team (2024)*
  Published on *arXiv*

- **[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)**
  *DeepSeek-AI (2025)*
  Published on *arXiv*

- **[Mixtral of Experts](https://arxiv.org/abs/2401.04088)**
    *Mistral AI team (2024)*
    Published on *arXiv*

- **[Mistral 7B](https://arxiv.org/abs/2310.06825)**
    *Mistral AI team (2023)*
    Published on *arXiv*

- **[The Lucie-7B LLM and the Lucie Training Dataset: Open Resources for
  Multilingual Language Generation](https://arxiv.org/abs/)** *Olivier Gouvert,
  Julie Hunter, Jérôme Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais,
  Anastasia Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré
  (2025)*


## Prompt engineering

### ICL

- **A Survey on In-context Learning**
  *Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia,
  Jingjing Xu, Zhiyong Wu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui (2024)*
  Presented at the *Conference on Empirical Methods in Natural Language
  Processing (EMNLP)*, Miami, Florida, USA. Published by the Association for
  Computational Linguistics.
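
In-context learning conditions the model on a few input/output demonstrations placed directly in the prompt, with no weight update. A minimal sketch of building such a few-shot prompt is shown below; the demonstrations are invented and `query_llm` stands for any LLM backend.

```python
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want my two hours back.", "negative"),
    ("A serviceable but forgettable thriller.", "negative"),
]

def build_few_shot_prompt(query):
    # Each (input, label) pair is shown to the model in-context before the query.
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt("An unexpected gem.")
# answer = query_llm(prompt)  # hypothetical call to any LLM backend
```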

### CoT

- **[Chain-of-Thought Prompting Elicits Reasoning in Large Language
  Models](https://papers.nips.cc/paper/52604-chain-of-thought-prompting-elicits-reasoning-in-large-language-models)**
  *Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, et al. (2022)*
  Presented at *NeurIPS*
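
Chain-of-thought prompting asks the model to produce intermediate reasoning steps before its final answer, either through worked demonstrations or a simple trigger phrase. A minimal zero-shot variant is sketched below; `query_llm` is a placeholder for any LLM backend.

```python
question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

# Zero-shot chain-of-thought: append a reasoning trigger to the question so the
# model spells out intermediate steps before committing to an answer.
cot_prompt = f"{question}\nLet's think step by step."

# answer = query_llm(cot_prompt)  # hypothetical call; the expected final answer is 9
```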


### RAG

- **[Retrieval-Augmented Generation for Large Language Models: A
  Survey](https://arxiv.org/abs/2312.10997)** *Yunfan Gao, Yun Xiong, Xinyu Gao,
  Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, Haofen Wang
  (2024)* Published on *arXiv*
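
Retrieval-augmented generation first retrieves the documents most relevant to the query, then conditions the LLM on them. The sketch below illustrates the idea with cosine similarity over embeddings; `embed` and `query_llm` are placeholders for any embedding model and LLM backend.

```python
import numpy as np

documents = [
    "LangChain is a framework for composing LLM applications.",
    "Retrieval-augmented generation grounds answers in external documents.",
    "Generative agents simulate believable human behaviour.",
]

def retrieve(query, embed, k=2):
    # Rank documents by cosine similarity between query and document embeddings.
    doc_vecs = np.array([embed(d) for d in documents])
    q = np.array(embed(query))
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-scores)[:k]]

def rag_answer(query, embed, query_llm):
    # Stuff the retrieved passages into the prompt as grounding context.
    context = "\n".join(retrieve(query, embed))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return query_llm(prompt)
```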

## Generative Autonomous Agents

- **A Survey on Large Language Model Based Autonomous Agents**
  *Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan
  Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Jirong Wen
  (2024)*
  Published in *Frontiers of Computer Science* (Volume 18, Issue 6, Article
  186345). Publisher: Springer


- **[HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging
  Face](https://papers.nips.cc/paper/2023/hash/38154-hugginggpt-solving-ai-tasks-with-chatgpt-and-its-friends-in-hugging-face.pdf)**
  *Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
  (2023)*
  Presented at *Advances in Neural Information Processing Systems (NeurIPS)*
  (Volume 36, Pages 38154–38180). Publisher: Curran Associates, Inc.

- **[Toolformer: Language Models Can Teach Themselves to Use Tools](https://papers.nips.cc/paper/86759-toolformer-language-models-can-teach-themselves-to-use-tools)**
   *Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, et al. (2023)*
   Presented at *NeurIPS*

- **[Cognitive Architectures for Language Agents](https://arxiv.org/abs/2309.02427)**
    *Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths (2024)*
    Published on *arXiv*


### Generative Autonomous Agents on the shelf

- [LangChain](https://www.langchain.com) is an open-source framework for
  building applications on top of LLMs. It can be used to compose prompts and
  high-level reasoning sequences, build conversational agents and RAG
  (Retrieval-Augmented Generation) pipelines, summarise documents, or generate
  synthetic data.

- [LangGraph](https://langchain-ai.github.io/langgraph) is a low-level library
  for designing cognitive architectures for autonomous agents whose reasoning
  engine is an LLM (a minimal sketch of such an agent loop follows this list).

- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) is a platform for
  the creation, deployment, and management of generative agents.

- [WorkGPT](https://github.com/team-openpm/workgpt) is a framework similar to
  AutoGPT.
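
As referenced above, these frameworks implement, in much richer form, a loop in which the LLM decides whether to call a tool, observes the result, and iterates until it can answer. The sketch below is framework-agnostic and illustrative only: `query_llm`, the JSON protocol, and the single `calculator` tool are assumptions, not any particular library's API.

```python
import json

def calculator(expression: str) -> str:
    # Toy tool: evaluate an arithmetic expression. Not safe for untrusted input.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def run_agent(task, query_llm, max_steps=5):
    """Minimal tool-using agent loop: at each step the LLM returns either a
    JSON tool call or a final answer; tool observations are appended to the
    running context. Real frameworks add parsing, retries, and memory."""
    context = f"Task: {task}\n"
    instructions = (
        'Reply with JSON: {"action": "calculator", "input": "<expression>"} '
        'or {"action": "final", "answer": "<answer>"}'
    )
    for _ in range(max_steps):
        reply = query_llm(context + instructions)
        decision = json.loads(reply)  # assumes the model returns valid JSON
        if decision["action"] == "final":
            return decision["answer"]
        observation = TOOLS[decision["action"]](decision["input"])
        context += f"Observation: {observation}\n"
    return "No answer within the step budget."
```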


## Generative MAS


- **[Large language models empowered agent-based modeling and simulation: A
  survey and perspectives](https://doi.org/10.1057/s41599-024-01235-9)**
  *Chen Gao, Xiaochong Lan, Nian Li, Yuan Yuan, Jingtao Ding, Zhilun Zhou,
  Fengli Xu, Yong Li (2024)*
  Published in *Humanities and Social Sciences Communications*, Volume 11,
  Issue 1, Pages 1–24. See also the accompanying
  [repository](https://github.com/tsinghua-fib-lab/LLM-Agent-Based-Modeling-and-Simulation).


- **[Social Simulacra: Creating Populated Prototypes for Social Computing
  Systems](https://dl.acm.org/doi/10.1145/3526110.3545617)** *Joon Sung Park,
  Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, Michael S.
  Bernstein (2022)* Published in *Proceedings of the 35th Annual ACM Symposium
  on User Interface Software and Technology* (Article 74, 18 pages). Location:
  Bend, OR, USA

- **[Generative Agents: Interactive Simulacra of Human
  Behavior](https://dl.acm.org/doi/10.1145/3586184.3594067)** *Joon Sung Park,
  Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, Michael
  S. Bernstein (2023)* Published in *Proceedings of the 36th Annual ACM
  Symposium on User Interface Software and Technology* (UIST '23, Article 2,
  22 pages). Location: San Francisco, CA, USA

- **[AgentVerse: Facilitating multi-agent collaboration and exploring emergent
  behaviors](https://openreview.net/forum?id=HywBMyh6JGR)** *Weize Chen, Yusheng
  Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu,
  Yi-Hsin Hung, Chen Qian, et al. (2023)* Published in *The Twelfth
  International Conference on Learning Representations (ICLR 2024)*

- **[Training socially aligned language models on simulated social
  interactions](https://arxiv.org/abs/2305.16960)** *Ruibo Liu, Ruixin Yang,
  Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, Soroush Vosoughi
  (2023)* Published on *arXiv* arXiv:2305.16960

- **[S3: Social-network Simulation System with Large Language Model-Empowered
  Agents](https://arxiv.org/abs/2307.14984)** *Chen Gao, Xiaochong Lan, Zhihong
  Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li (2023)*
  Published on *arXiv* arXiv:2307.14984

### Generative MAS on the shelf

- [MetaGPT](https://github.com/geekan/MetaGPT) is a framework for creating
  generative MAS dedicated to software development.

- [CAMEL](https://github.com/camel-ai/camel) proposes a generative multi-agent
  framework for accomplishing complex tasks.

- [AutoGen](https://github.com/microsoft/autogen) is a versatile open-source
  framework for creating generative multi-agent systems (a minimal sketch of a
  two-agent conversation follows this list).
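
As referenced above, these frameworks orchestrate conversations between several LLM-backed agents. The sketch below shows the bare idea: each agent is a persona with a naive memory wrapped around an LLM call. `query_llm` is a placeholder for any backend (for example, the Ollama call shown earlier), and the conversation protocol is deliberately simplistic.

```python
class GenerativeAgent:
    """A persona wrapped around an LLM call, with a naive conversation memory."""

    def __init__(self, name, persona, query_llm):
        self.name = name
        self.persona = persona
        self.query_llm = query_llm
        self.memory = []  # full transcript seen by this agent

    def reply(self, message):
        self.memory.append(f"Interlocutor: {message}")
        prompt = (
            f"You are {self.name}. {self.persona}\n"
            + "\n".join(self.memory)
            + f"\n{self.name}:"
        )
        answer = self.query_llm(prompt)
        self.memory.append(f"{self.name}: {answer}")
        return answer

def converse(agent_a, agent_b, opening, rounds=3):
    # Alternate turns between the two agents for a fixed number of rounds.
    message = opening
    for _ in range(rounds):
        message = agent_a.reply(message)
        message = agent_b.reply(message)
```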

## Authors

Maxime MORGE

## License

This program is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program. If not, see <http://www.gnu.org/licenses/>.