Commit bc904b92 authored 2 months ago by Maxime MORGE
LLM4AAMAS : rm HTML
parent 7c72e9d4
Showing 1 changed file: README.html deleted (100644 → 0), with 0 additions and 272 deletions
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html
xmlns=
"http://www.w3.org/1999/xhtml"
>
<head>
<title>
README.html
</title>
<meta
http-equiv=
"Content-Type"
content=
"text/html;charset=utf-8"
/>
</head>
<body>
<h1
id=
"llm4aamas"
>
LLM4AAMAS
</h1>
<p>
Generative Autonomous Agents and Multi-Agent Systems (AAMAS) offer
promising opportunities for solving problems in open environments and
simulating complex social dynamics.
</p>
<p>
This repository contains a collection of papers and resources
related to generative AAMAS. This list is a work in progress and will be
regularly updated with new resources.
</p>
<h2
id=
"artificial-intelligence"
>
Artificial Intelligence
</h2>
<ul>
<li><p><strong><a
href=
"https://hal.archives-ouvertes.fr/hal-04245057"
>
Intelligence
artificielle : une approche moderne (4e édition)
</a></strong>
<em>
Stuart
Russell, Peter Norvig, Fabrice Popineau, Laurent Miclet, Claire Cadet
(2021)
</em>
Publisher: Pearson France
</p></li>
<li><p><strong><a
href=
"https://www.eyrolles.com/"
>
Apprentissage
artificiel - 3e édition : Deep learning, concepts et
algorithmes
</a></strong>
<em>
Antoine Cornuéjols, Laurent Miclet, Vincent
Barra (2018)
</em>
Publisher: Eyrolles
</p></li>
</ul>
<h2
id=
"neural-networks-rnn-transformers"
>
Neural networks (RNN,
Transformers)
</h2>
<ul>
<li><p><strong><a
href=
"https://doi.org/10.1038/323533a0"
>
Learning
representations by back-propagating errors
</a></strong>
<em>
David E.
Rumelhart, Geoffrey E. Hinton, Ronald J. Williams (1986)
</em>
Published
in
<em>
Nature
</em></p></li>
<li><p><strong><a
href=
"https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
>
ImageNet
Classification with Deep Convolutional Neural Networks
</a></strong>
<em>
Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (2012)
</em>
Presented at
<em>
NeurIPS
</em></p></li>
</ul>
<h2
id=
"large-language-models"
>
Large Language Models
</h2>
<ul>
<li><p><strong><a
href=
"https://arxiv.org/abs/2303.18223"
>
A Survey of
Large Language Models
</a></strong>
<em>
Wayne Xin Zhao, Kun Zhou, Junyi
Li, et al. (2024)
</em>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2402.01680"
>
Large Language
Model based Multi-Agents: A Survey of Progress and
Challenges
</a></strong>
<em>
Taicheng Guo et al. (2024)
</em>
Published on
<em>
arXiv
</em>
arXiv:2402.01680 [cs.CL]
</p></li>
<li><p><strong><a
href=
"https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf"
>
Improving
language understanding by generative pre-training
</a></strong>
<em>
Alec
Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (2018)
</em>
Published by OpenAI
</p></li>
<li><p><strong><a
href=
"https://www.aclweb.org/anthology/N19-1423/"
>
BERT: Pre-training of
Deep Bidirectional Transformers for Language Understanding
</a></strong>
<em>
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
(2019)
</em>
Presented at
<em>
NAACL-HLT
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/1409.3215"
>
Sequence to
Sequence Learning with Neural Networks
</a></strong>
<em>
Ilya Sutskever,
Oriol Vinyals, Quoc V. Le (2014)
</em>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/1406.1078"
>
Learning Phrase
Representations using RNN Encoder-Decoder for Statistical Machine
Translation
</a></strong>
<em>
Kyunghyun Cho, Bart van Merrienboer, Caglar
Gulcehre, et al. (2014)
</em>
Published on
<em>
arXiv
</em></p></li>
</ul>
<h2
id=
"tuning"
>
Tuning
</h2>
<h3
id=
"instruction-tuning"
>
Instruction tuning
</h3>
<ul>
<li><p><strong><a
href=
"https://arxiv.org/abs/2106.09685"
>
LoRA: Low-Rank
Adaptation of Large Language Models
</a></strong>
<em>
Edward J. Hu,
Yelong Shen, Phillip Wallis, et al. (2021)
</em>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://papers.nips.cc/paper/2020/file/fc2c7f9a3f3f86cde5d8ad2c7f7e57b2-Paper.pdf"
>
Language
Models are Few-Shot Learners
</a></strong>
<em>
Tom Brown, Benjamin Mann,
Nick Ryder, et al. (2020)
</em>
Presented at
<em>
NeurIPS
</em></p></li>
</ul>
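<p>
As a toy illustration of the low-rank adaptation idea in the LoRA paper
above, the sketch below applies a low-rank update to a frozen weight
matrix in plain NumPy; the dimensions, rank, and scaling factor are
arbitrary choices for the example, not values from the paper.
</p>

```python
import numpy as np

# LoRA freezes the pretrained weights W and trains only a low-rank update:
#   W_eff = W + (alpha / r) * B @ A
# with A of shape (r, d_in) and B of shape (d_out, r), so the trainable
# parameter count drops from d_in * d_out to r * (d_in + d_out).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 2, 4.0

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialised

def forward(x):
    """Base path plus scaled low-rank adaptation path."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialised to zero, the adapted model matches the base model exactly.
assert np.allclose(forward(x), W @ x)

full_params = d_in * d_out        # 256 parameters for full fine-tuning
lora_params = r * (d_in + d_out)  # 64 trainable parameters with LoRA
```

Zero-initialising B is what lets fine-tuning start from the pretrained
behaviour and drift away from it gradually as A and B are trained.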
<h3
id=
"alignement-tuning"
>
Alignment tuning
</h3>
<ul>
<li><strong><a
href=
"https://papers.nips.cc/paper/2022/hash/17f4c5f98073d1fb95f7e53f5c7fdb64-Abstract.html"
>
Training
language models to follow instructions with human feedback
</a></strong>
<em>
Long Ouyang, Jeffrey Wu, Xu Jiang, et al. (2022)
</em>
Presented at
<em>
NeurIPS
</em></li>
</ul>
<h2
id=
"existing-llms"
>
Existing LLMs
</h2>
<p>
Many models are available at
<a
href=
"https://www.nomic.ai/gpt4all"
>
https://www.nomic.ai/gpt4all
</a>
and
<a
href=
"https://huggingface.co/models"
>
https://huggingface.co/models
</a>
.
</p>
<ul>
<li><p><strong><a
href=
"https://arxiv.org/abs/2303.08774"
>
GPT-4
Technical Report
</a></strong>
<em>
OpenAI Team (2024)
</em>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2407.21783"
>
The Llama 3
Herd of Models
</a></strong>
<em>
Meta Team (2024)
</em>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://github.com/tatsu-lab/stanford_alpaca"
>
Stanford Alpaca: An
Instruction-Following LLaMa Model
</a></strong>
<em>
Rohan Taori, Ishaan
Gulrajani, Tianyi Zhang, Yann Dubois, et al. (2023)
</em>
Published on
<em>
GitHub
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2401.04088"
>
Mixtral of
Experts
</a></strong><br
/>
<em>
Mistral AI team (2024)
</em><br
/>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2310.06825"
>
Mistral
7B
</a></strong><br
/>
<em>
Mistral AI team (2023)
</em><br
/>
Published on
<em>
arXiv
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/"
>
The Lucie-7B LLM and the
Lucie Training Dataset: Open Resources for Multilingual Language
Generation
</a></strong>
<em>
Olivier Gouvert, Julie Hunter, Jérôme
Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais, Anastasia
Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré
(2025)
</em></p></li>
</ul>
<h2
id=
"prompt-engineering"
>
Prompt engineering
</h2>
<h3
id=
"icl"
>
In-Context Learning (ICL)
</h3>
<ul>
<li><strong>
A Survey on In-context Learning
</strong>
<em>
Qingxiu Dong,
Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing
Xu, Zhiyong Wu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui (2024)
</em>
Presented at the
<em>
Conference on Empirical Methods in Natural Language
Processing (EMNLP)
</em>
Location: Miami, Florida, USA. Publisher:
Association for Computational Linguistics
</li>
</ul>
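<p>
The in-context learning setup surveyed above can be illustrated with a
minimal few-shot prompt builder; the demonstrations and labels below are
invented for the example.
</p>

```python
# In-context learning supplies task demonstrations directly in the prompt;
# the model's weights are never updated.
def build_icl_prompt(demonstrations, query):
    """Format (input, label) pairs followed by the unanswered query."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [
    ("I loved this film", "positive"),
    ("Terribly boring plot", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise")
# The prompt ends with an open "Output:" slot for the model to complete.
assert prompt.endswith("Output:")
```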
<h3
id=
"cot"
>
Chain-of-Thought (CoT)
</h3>
<ul>
<li><strong><a
href=
"https://papers.nips.cc/paper/52604-chain-of-thought-prompting-elicits-reasoning-in-large-language-models"
>
Chain-of-Thought
Prompting Elicits Reasoning in Large Language Models
</a></strong>
<em>
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, et
al. (2022)
</em>
Presented at
<em>
NeurIPS
</em></li>
</ul>
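<p>
A chain-of-thought demonstration, in the spirit of Wei et al., prepends a
worked rationale to each answer; the arithmetic example below paraphrases
the style used in the paper rather than quoting it.
</p>

```python
# Each demonstration carries an intermediate reasoning trace, so the model
# imitates step-by-step reasoning before committing to a final answer.
def build_cot_prompt(examples, question):
    parts = [
        f"Q: {q}\nA: {rationale} The answer is {answer}."
        for q, rationale, answer in examples
    ]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

examples = [(
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls now?",
    "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "11",
)]
cot_prompt = build_cot_prompt(
    examples, "A baker had 23 loaves and sold 9. How many are left?"
)
```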
<h3
id=
"rag"
>
Retrieval-Augmented Generation (RAG)
</h3>
<ul>
<li><strong><a
href=
"https://arxiv.org/abs/2312.10997"
>
Retrieval-Augmented Generation
for Large Language Models: A Survey
</a></strong>
<em>
Yunfan Gao, Yun
Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei
Sun, Meng Wang, Haofen Wang (2024)
</em>
Published on
<em>
arXiv
</em></li>
</ul>
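<p>
The retrieve-then-generate loop described in the survey can be sketched
with a toy bag-of-words retriever; production systems use dense
embeddings and vector indexes, but the control flow is the same. The
documents below are invented.
</p>

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and keep the top k."""
    qv = Counter(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: cosine(qv, Counter(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

docs = [
    "LoRA adapts large language models with low-rank weight updates.",
    "Chain-of-thought prompting elicits step-by-step reasoning.",
    "Retrieval-augmented generation grounds answers in external documents.",
]
question = "What grounds generation in external documents?"
context = retrieve(question, docs)[0]
# The retrieved passage is spliced into the prompt ahead of the question.
rag_prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
```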
<h2
id=
"generative-autonomous-agents"
>
Generative Autonomous Agents
</h2>
<ul>
<li><p><strong>
A Survey on Large Language Model Based Autonomous
Agents
</strong>
<em>
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang,
Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin
Zhao, Zhewei Wei, Jirong Wen (2024)
</em>
Published in
<em>
Frontiers of Computer Science
</em>
(Volume 18, Issue 6, Article number: 186345)
Publisher: Springer
</p></li>
<li><p><strong><a
href=
"https://papers.nips.cc/paper/2023/hash/38154-hugginggpt-solving-ai-tasks-with-chatgpt-and-its-friends-in-hugging-face.pdf"
>
HuggingGPT:
Solving AI Tasks with ChatGPT and its Friends in Hugging
Face
</a></strong>
<em>
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li,
Weiming Lu, Yueting Zhuang (2023)
</em>
Presented at
<em>
Advances in
Neural Information Processing Systems (NeurIPS)
</em>
Pages: 38154–38180
Publisher: Curran Associates, Inc. Volume: 36
</p></li>
<li><p><strong><a
href=
"https://papers.nips.cc/paper/86759-toolformer-language-models-can-teach-themselves-to-use-tools"
>
Toolformer:
Language Models Can Teach Themselves to Use Tools
</a></strong>
<em>
Timo
Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, et
al. (2023)
</em>
Presented at
<em>
NeurIPS
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2309.02427"
>
Cognitive
Architectures for Language Agents
</a></strong><br
/>
<em>
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L.
Griffiths (2024)
</em><br
/>
Published on
<em>
arXiv
</em></p></li>
</ul>
<h3
id=
"generative-autonomous-agents-on-the-shelf"
>
Generative Autonomous
Agents on the shelf
</h3>
<ul>
<li><p><a
href=
"https://www.langchain.com"
>
LangChain
</a>
is an
open-source framework for designing prompts for LLMs. It can be used to
define high-level reasoning sequences, conversational agents, RAG
(Retrieval-Augmented Generation) pipelines, document summarization, or
even the generation of synthetic data.
</p></li>
<li><p><a
href=
"https://langchain-ai.github.io/langgraph"
>
LangGraph
</a>
is a low-level library for designing cognitive architectures for
autonomous agents whose reasoning engine is an LLM.
</p></li>
<li><p><a
href=
"https://github.com/Significant-Gravitas/AutoGPT"
>
AutoGPT
</a>
is a
platform for the creation, deployment, and management of generative
agents.
</p></li>
<li><p><a
href=
"https://github.com/team-openpm/workgpt"
>
WorkGPT
</a>
is
similar to AutoGPT.
</p></li>
</ul>
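<p>
The prompt-pipeline pattern these frameworks implement (template, then
model, then output parser) can be sketched framework-free; the code below
deliberately avoids any real LangChain API and stubs out the model call.
</p>

```python
# A framework-free sketch of the template -> model -> parser pipeline.
class PromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

def stub_llm(prompt: str) -> str:
    # Placeholder standing in for a real LLM invocation.
    return f"[model response to: {prompt}]"

def run_chain(template, llm, parser, **inputs):
    """Fill the template, call the model, parse the output."""
    return parser(llm(template.format(**inputs)))

template = PromptTemplate("Summarise in one sentence: {text}")
result = run_chain(
    template, stub_llm, str.strip,
    text="Generative agents simulate social behaviour.",
)
```

Swapping the stub for a real client and the parser for a structured-output
parser recovers the basic shape of a chain in any of the tools above.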
<h2
id=
"generative-mas"
>
Generative MAS
</h2>
<ul>
<li><p><strong><a
href=
"https://doi.org/10.1057/s41599-024-01235-9"
>
Large language models
empowered agent-based modeling and simulation: A survey and
perspectives
</a></strong>
<em>
Chen Gao, Xiaochong Lan, Nian Li, Yuan Yuan,
Jingtao Ding, Zhilun Zhou, Fengli Xu, Yong Li (2024)
</em>
Published in
<em>
Humanities and Social Sciences Communications
</em>
, Volume 11, Issue
1, Pages 1–24
</p></li>
<li><p><strong><a
href=
"https://dl.acm.org/doi/10.1145/3526110.3545617"
>
Social Simulacra:
Creating Populated Prototypes for Social Computing Systems
</a></strong>
<em>
Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel
Morris, Percy Liang, Michael S. Bernstein (2022)
</em>
Published in
<em>
Proceedings of the 35th Annual ACM Symposium on User Interface
Software and Technology
</em>
Articleno: 74, Pages: 18, Location: Bend,
OR, USA
</p></li>
<li><p><strong><a
href=
"https://dl.acm.org/doi/10.1145/3586184.3594067"
>
Generative Agents:
Interactive Simulacra of Human Behavior
</a></strong>
<em>
Joon Sung Park,
Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang,
Michael S. Bernstein (2023)
</em>
Published in
<em>
Proceedings of the
36th Annual ACM Symposium on User Interface Software and Technology
</em>
Articleno: 2, Pages: 22, Location: San Francisco, CA, USA, Series: UIST
’23
</p></li>
<li><p><strong><a
href=
"https://openreview.net/forum?id=HywBMyh6JGR"
>
Agentverse:
Facilitating multi-agent collaboration and exploring emergent
behaviors
</a></strong>
<em>
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng
Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen
Qian, et al. (2023)
</em>
Published in
<em>
The Twelfth International
Conference on Learning Representations (ICLR 2024)
</em></p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2305.16960"
>
Training
socially aligned language models on simulated social
interactions
</a></strong>
<em>
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge
Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, Soroush Vosoughi
(2023)
</em>
Published on
<em>
arXiv
</em>
arXiv:2305.16960
</p></li>
<li><p><strong><a
href=
"https://arxiv.org/abs/2307.14984"
>
S3: Social-network
Simulation System with Large Language Model-Empowered Agents
</a></strong>
<em>
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao,
Huandong Wang, Depeng Jin, Yong Li (2023)
</em>
Published on
<em>
arXiv
</em>
arXiv:2307.14984
</p></li>
</ul>
<h3
id=
"generative-mas-on-the-shelf"
>
Generative MAS on the shelf
</h3>
<ul>
<li><p><a
href=
"https://github.com/geekan/MetaGPT"
>
MetaGPT
</a>
is a
framework for creating generative MAS dedicated to software
development.
</p></li>
<li><p><a
href=
"https://github.com/camel-ai/camel"
>
CAMEL
</a>
proposes a
generative multi-agent framework for accomplishing complex
tasks.
</p></li>
<li><p><a
href=
"https://github.com/microsoft/autogen"
>
AutoGen
</a>
is a
versatile open-source framework for creating generative multi-agent
systems.
</p></li>
</ul>
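<p>
The conversation loop at the heart of these frameworks can be sketched in
plain Python: two agents exchange messages until one emits a termination
marker. The scripted replies below stand in for LLM calls; none of this
uses the actual AutoGen API.
</p>

```python
class Agent:
    """An agent identified by a name and a reply function (an LLM stub here)."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, message: str) -> str:
        return self.reply_fn(message)

def chat(initiator, responder, opening, max_turns=6):
    """Alternate replies between two agents until TERMINATE or max_turns."""
    transcript = [(initiator.name, opening)]
    speaker, other, message = responder, initiator, opening
    while len(transcript) < max_turns:
        message = speaker.reply(message)
        transcript.append((speaker.name, message))
        if "TERMINATE" in message:
            break
        speaker, other = other, speaker
    return transcript

coder = Agent("coder", lambda m: "def add(a, b): return a + b")
critic = Agent(
    "critic",
    lambda m: "Looks correct. TERMINATE" if "def add" in m else "Please write add().",
)
# The critic opens, the coder answers, the critic approves and terminates.
log = chat(critic, coder, "Please write add().")
```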
<h2
id=
"authors"
>
Authors
</h2>
<p>
Maxime MORGE
</p>
<h2
id=
"license"
>
License
</h2>
<p>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by the
Free Software Foundation, either version 3 of the License, or (at your
option) any later version.
</p>
<p>
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details.
</p>
<p>
You should have received a copy of the GNU General Public License
along with this program. If not, see
<a
href=
"http://www.gnu.org/licenses/"
class=
"uri"
>
http://www.gnu.org/licenses/
</a>
.
</p>
</body>
</html>