From 5eff221493701e5799f15bb15f9458d33690bd1e Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Wed, 19 Feb 2025 16:41:26 +0100
Subject: [PATCH] LLM4AAMAS: Add cao23arxiv

---
 README.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.md b/README.md
index 74ca36c..a450893 100644
--- a/README.md
+++ b/README.md
@@ -52,6 +52,16 @@ to generative AAMAS. This list is a work in progress and will be regularly updat
   **[A Survey of Large Language Models](https://arxiv.org/abs/2303.18223)**
   Wayne Xin Zhao, Kun Zhou, Junyi Li, et al. (2024) Published on *arXiv*
 
+- AI-generated content typically follows two steps: extracting intent from human
+  instructions and generating content accordingly. Unimodal models process
+  instructions and produce output in the same modality, while multimodal models
+  handle cross-modal inputs and produce outputs in different modalities.
+
+  **[A Comprehensive Survey of AI-Generated Content (AIGC): A History of
+  Generative AI from GAN to ChatGPT](https://arxiv.org/abs/2303.04226)** Yihan
+  Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, Lichao Sun
+  (2023) Published on *arXiv*
+
 - A framework for achieving strong natural language understanding with a single
   task-agnostic model through generative pre-training and discriminative
   fine-tuning.
--
GitLab