
LLM4AAMAS

Generative Autonomous Agents and Multi-Agent Systems (AAMAS) offer promising opportunities for solving problems in open environments and simulating complex social dynamics.

This repository contains a collection of papers and resources related to generative AAMAS. This list is a work in progress and will be regularly updated with new resources.

Artificial Intelligence

Neural networks (RNN, Transformers)

  • The back-propagation method adjusts the connection weights by propagating errors backward from the output layer to the input layer, aiming to minimize the error and bring the classification as close as possible to the optimum (the corresponding weight-update rule is sketched after this list).

    Learning representations by back-propagating errors David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams (1986) Published in Nature

  • This approach has halved the image classification error rate on the ImageNet dataset.

    ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (2012) Presented at NeurIPS
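
For reference, the gradient-descent weight-update rule at the core of back-propagation can be written as follows (generic notation, not taken verbatim from the paper):

```latex
% Each weight w_{ij} is adjusted against the gradient of the error E,
% with learning rate \eta; the gradient is obtained by propagating
% error terms \delta backward through the chain rule.
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}},
\qquad
\frac{\partial E}{\partial w_{ij}} = \delta_j \, y_i,
\qquad
\delta_j =
\begin{cases}
(y_j - t_j)\, f'(z_j) & \text{if } j \text{ is an output unit},\\
\big(\sum_k \delta_k w_{jk}\big)\, f'(z_j) & \text{otherwise},
\end{cases}
```

where \(y_i\) is the activation of unit \(i\), \(z_j\) the weighted input to unit \(j\), \(f\) the activation function, and \(t_j\) the target output.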

Large Language Models

Tuning

Instruction tuning

  • Fine-tuning a pre-trained language model requires significantly less data and computational resources, especially when parameter-efficient approaches such as Low-Rank Adaptation (LoRA) are used (a minimal LoRA configuration is sketched after this list).

    LoRA: Low-Rank Adaptation of Large Language Models Edward J. Hu, Yelong Shen, Phillip Wallis, et al. (2021) Published on arXiv

  • The apparent mastery of textual understanding by LLMs closely resembles human performance.

    Language Models are Few-Shot Learners Tom Brown, Benjamin Mann, Nick Ryder, et al. (2020) Presented at NeurIPS
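
Below is a minimal sketch of parameter-efficient fine-tuning with LoRA, assuming the Hugging Face `transformers` and `peft` libraries; the base model and hyperparameters are illustrative choices, not values prescribed by the paper.

```python
# Minimal LoRA sketch: only small low-rank matrices inserted into the
# attention projections are trained, while the base model stays frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # illustrative small checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```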

Alignment tuning

Existing LLMs

Many models are available at the following URLs: https://ollama.com, https://www.nomic.ai/gpt4all and https://huggingface.co/models.

Prompt engineering

ICL

In-context learning involves conditioning the model on task-specific information and demonstrations placed directly in the prompt, without requiring additional training, as sketched after the reference below.

  • A Survey on In-context Learning Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui (2024) Presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP) Location: Miami, Florida, USA Published by: Association for Computational Linguistics
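
A minimal sketch of few-shot in-context learning: the task is taught purely through demonstrations in the prompt, with no weight update. The sentiment task and the `complete` call are illustrative placeholders for whichever model and client are actually used.

```python
# Build a few-shot prompt from labelled demonstrations plus a new query.
def build_few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("A delightful film from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The plot was thin but the acting saved it.")
# answer = complete(prompt)  # placeholder LLM call; the model infers the pattern
print(prompt)
```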

CoT

Chain-of-thought is a prompting strategy that, instead of being limited to input-output pairs, incorporates intermediate reasoning steps that serve as a link between the inputs and the output.
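
A minimal chain-of-thought prompt is sketched below, assuming a generic completion function `complete`; the arithmetic word problems are purely illustrative.

```python
# The demonstration includes the intermediate reasoning steps, not just the
# input-output pair, nudging the model to reason step by step before answering.
cot_prompt = """Q: A farmer has 3 fields with 12 apple trees each.
Each tree yields 10 apples. How many apples in total?
A: There are 3 x 12 = 36 trees. Each tree yields 10 apples,
so 36 x 10 = 360 apples. The answer is 360.

Q: A library has 5 shelves with 24 books each and buys 30 more books.
How many books does it have now?
A:"""
# answer = complete(cot_prompt)  # placeholder LLM call
print(cot_prompt)
```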

RAG

Retrieval-Augmented Generation (RAG) is a prompting strategy that involves integrating relevant information from external data sources into the instructions to enhance the model’s responses using specific and/or recent knowledge.
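
A minimal RAG sketch follows; the toy word-overlap retriever stands in for the embedding model and vector store a real system would use, and `complete` is a placeholder LLM call.

```python
# Retrieve the documents most relevant to the question, then inject them
# into the prompt so the model answers from specific and/or recent knowledge.
DOCUMENTS = [
    "LoRA adds trainable low-rank matrices to a frozen pre-trained model.",
    "Chain-of-thought prompting adds intermediate reasoning steps to prompts.",
    "Retrieval-augmented generation injects retrieved passages into the prompt.",
]

def retrieve(question, documents, k=2):
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

question = "How does LoRA adapt a pre-trained model?"
context = "\n".join(retrieve(question, DOCUMENTS))
prompt = (f"Answer using only the context below.\n\n"
          f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
# answer = complete(prompt)  # placeholder LLM call
print(prompt)
```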

Generative Autonomous Agents

Leveraging the commonsense knowledge integrated into LLMs represents a promising solution to equip autonomous agents with the capabilities necessary to adapt to new tasks, while reducing reliance on knowledge engineering or trial-and-error learning.

  • A Survey on Large Language Model Based Autonomous Agents Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Jirong Wen (2024) Published in Frontiers of Computer Science (Volume 18, Issue 6, Pages 186345) Publisher: Springer

Multiple works aim to equip LLMs with the ability to use external tools, such as a calculator, a calendar, a DBMS, a code interpreter, a search engine, a machine translation tool, a question-answering system, or an AI tool.
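
A minimal tool-use sketch: the agent exposes a small registry of tools and executes the call requested by the model. The `TOOL: name(argument)` convention is an illustrative assumption, not a standard protocol; real frameworks typically use structured function-calling APIs.

```python
# Dispatch a tool request emitted by the model to the matching tool.
import datetime

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy calculator
    "calendar": lambda _: datetime.date.today().isoformat(),
}

def execute_tool_call(call: str) -> str:
    """Parse 'TOOL: name(argument)' and run the corresponding tool."""
    name, _, arg = call.removeprefix("TOOL:").strip().partition("(")
    return TOOLS[name.strip()](arg.rstrip(")"))

# Suppose the model replied with a tool request instead of a final answer:
model_output = "TOOL: calculator(12 * 7 + 4)"
print(execute_tool_call(model_output))  # -> 88
```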

To react autonomously in an environment, a generative agent must interpret its perceptions (e.g., a user request) based on the knowledge stored in its memory, reason, and plan actions. It must execute the plan step by step with the help of tools and refine the plan based on feedback from the environment.
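
A minimal sketch of such a perceive-plan-act-refine loop, where `llm` is a placeholder completion function and the memory and action formats are illustrative assumptions.

```python
# Iteratively plan an action, execute it with a tool, and feed the
# observation back into memory until the model declares it is finished.
def run_agent(goal: str, llm, tools: dict, max_steps: int = 5) -> str:
    memory: list[str] = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Plan the next step from the goal and the feedback accumulated so far.
        plan = llm("\n".join(memory) + "\nNext action (tool: input) or FINISH: answer?")
        if plan.startswith("FINISH:"):
            return plan.removeprefix("FINISH:").strip()
        tool_name, _, tool_input = plan.partition(":")
        # Act with the chosen tool and record the observation in memory.
        observation = tools.get(tool_name.strip(), lambda x: "unknown tool")(tool_input.strip())
        memory.append(f"Action: {plan}\nObservation: {observation}")
    return "Gave up after too many steps."
```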

LLMs have the ability to emulate a real human in certain experiments in experimental economics or social psychology.

LLMs, notably GPT-4 with Tree-of-Thought (ToT) prompting, can simulate simple auction experiments in line with theoretical expectations.

Generative consultants can act as economic agents with limited agency.

AGENTBENCH is a systematically designed, multi-dimensional, evolving benchmark for evaluating LLMs as agents; it reveals a significant performance gap between top-tier models and their open-source (OSS) competitors.

Generative Autonomous Agents on the shelf

  • LangChain is an open-source framework for designing prompts for LLMs. It can be used to define high-level reasoning sequences, conversational agents, RAGs (Retrieval-Augmented Generation), document summaries, or even the generation of synthetic data.

  • LangGraph is a low-level library for designing cognitive architectures for autonomous agents whose reasoning engine is an LLM.

  • AutoGPT is a platform for the creation, deployment, and management of generative agents.

  • WorkGPT is a platform similar to AutoGPT.

Generative MAS

Building on the planning and reasoning abilities of LLMs, several works consider LLM-based multi-agent systems for complex problem-solving and world simulation.

LLMs can simulate realistic perceptions, reasoning, and decision-making, react adaptively to environments without predefined explicit instructions by adjusting their responses through contextual learning mechanisms, autonomously generate objectives, and interact and communicate in natural language.
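
A minimal sketch of two generative agents interacting in natural language, where `llm` is again a placeholder completion function and the persona and prompt formats are illustrative assumptions.

```python
# Each agent keeps its own persona and conversation history and replies
# to the other agent in natural language via an LLM call.
class GenerativeAgent:
    def __init__(self, name: str, persona: str, llm):
        self.name, self.persona, self.llm = name, persona, llm
        self.history: list[str] = []

    def reply(self, message: str) -> str:
        self.history.append(f"Other: {message}")
        prompt = (f"You are {self.name}. {self.persona}\n"
                  + "\n".join(self.history) + f"\n{self.name}:")
        answer = self.llm(prompt)
        self.history.append(f"{self.name}: {answer}")
        return answer

def converse(agent_a: GenerativeAgent, agent_b: GenerativeAgent,
             opening: str, rounds: int = 3) -> None:
    message = opening
    for _ in range(rounds):
        message = agent_a.reply(message)  # A answers B ...
        message = agent_b.reply(message)  # ... then B answers A
```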

Simulacra studies the emergent social behaviors of a generative multi-agent simulation in an environment inspired by The Sims.

AGENTVERSE is a general multi-agent framework that simulates problem-solving procedures of human groups.

An open-source platform to simulate a human society.

A simulation of the propagation processes in a social network.

Generative MAS on the shelf

  • MetaGPT is a framework for creating generative MAS dedicated to software development.

  • ChatDev is a framework for creating multi-agent collaboration networks for software development.

  • CAMEL proposes a generative multi-agent framework for accomplishing complex tasks.

  • Swarm is a framework for building asynchronous generative multi-agent systems.

  • AutoGen is a versatile open-source framework for creating generative multi-agent systems.

  • Magentic-One is a multi-agent architecture built on AutoGen where a lead Orchestrator agent is responsible for high-level planning, directing other agents and tracking task progress.

  • CrewAI combines LLM-based agents with precise control flow.

  • Agno is a lightweight framework for building generative multi-agent systems with workflows.

  • The Bee Agent Framework is a framework for building and deploying generative multi-agent workflows that manage and execute structured sequences of tasks.

Authors

Maxime MORGE

License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.