Maxime Morge / LLM4AAMAS / Commits

Commit af2c37c0, authored 1 week ago by Maxime Morge

LLM4AAMAS: Add gurcan24fai

Parent: 71822faf
Showing 1 changed file: README.md (+19, −9)
@@ -173,7 +173,7 @@ performance at inference time. Pretraining is crucial for novel tasks requiring
new skills, whereas TTS is effective when base models already perform reasonably
well.
- **[LLM Post-Training: A Deep Dive into Reasoning Large Language Models](https://arxiv.org/abs/2502.21321)**
  Komal Kumar, Tajamul Ashraf, Omkar Thawakar, Rao Muhammad Anwer, Hisham
  Cholakkal, Mubarak Shah, Ming-Hsuan Yang, Philip H. S. Torr, Salman Khan,
  Fahad Shahbaz Khan (2025) on *arXiv* (cs.CL).
@@ -400,6 +400,16 @@ objectives, and interact and communicate in natural language.
[repository](https://github.com/tsinghua-fib-lab/LLM-Agent-Based-Modeling-and-Simulation).
An LLM can be viewed as a non-deterministic simulator with the ability to
role-play an endless array of characters.
- **[LLM-Augmented Agent-Based Modelling for Social Simulations: Challenges and Opportunities](https://ebooks.iospress.nl/volumearticle/68007)**
  *Önder Gürcan (2024)*. Published in *HHAI 2024: Hybrid Human AI Systems for the Social Good*,
  Frontiers in Artificial Intelligence and Applications, Volume 386, pages 134–144.
  DOI: [10.3233/FAIA240190](https://doi.org/10.3233/FAIA240190)
Simulacra studies the emergent social behaviors of a generative multi-agent
simulation in an environment inspired by The Sims.
@@ -479,8 +489,8 @@ simple patterns, and may modify refined belief when taking actions.
- **[Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis](https://ojs.aaai.org/index.php/AAAI/article/view/29751)**
  Caoyun Fan, Jindou Chen, Yaohui Jin, and Hao He (2024). *Presented at AAAI*, **38**(16), 17960–17967.
When LLM-based agents participate in various games designed to assess different
traits—such as the dictator game (altruism), the ultimatum game (fairness), the
trust game (trust, fairness, altruism, and reciprocity), the bomb risk game
@@ -546,8 +556,8 @@ behavior in the dictator game, resembling human rates of sharing with other
humans. However, the AI agent shared substantially less of its endowment with
the human experimenter or an anonymous charity than with other AI agents.
- **[Evidence of behavior consistent with self-interest and altruism in an artificially intelligent agent](https://arxiv.org/abs/2301.02330)**
  Tim Johnson, Nick Obradovich (2023) on *arXiv*.
@@ -558,7 +568,7 @@ strategic scenarios, including the Water Allocation Challenge. The findings
indicate that while LLMs are capable of accounting for long-term outcomes, they
still show limitations in adaptability.
- **[ALYMPICS: LLM Agents Meet Game Theory](https://aclanthology.org/2025.coling-main.193/)**
  Shaoguang Mao, Yuzhe Cai, Yan Xia, Wenshan Wu, Xun Wang, Fengyi Wang, Qiang
  Guan, Tao Ge, Furu Wei (2025) at *COLING 2025*.
@@ -574,9 +584,9 @@ induction, and Bayesian belief updating. Interestingly, the model that initially
performed the worst without the workflow becomes the best with it — and vice
versa.
- **[Game-theoretic LLM: Agent Workflow for Negotiation Games](https://arxiv.org/abs/2411.05990)**
  Wenyue Hua, Ollie Liu, Lingyao Li, Alfonso Amayuelas, Julie Chen, Lucas Jiang, Mingyu Jin, Lizhou Fan,
  Fei Sun, William Wang, Xintong Wang, Yongfeng Zhang (2024) on *arXiv*.

### Generative MAS on the shelf