diff --git a/README.md b/README.md
index 74ca36c0b9f5dd2b0ab3c5a59172b9f3acbed79d..a45089344772e5fdd8816e31d013e2d2722d1d78 100644
--- a/README.md
+++ b/README.md
@@ -52,6 +52,16 @@ to generative AAMAS. This list is a work in progress and will be regularly updat
     **[A Survey of Large Language Models](https://arxiv.org/abs/2303.18223)**
     Wayne Xin Zhao, Kun Zhou, Junyi Li, et al. (2024) Published on *arXiv*
 
+- AI-generated content is typically produced in two steps: extracting intent
+  from human instructions and generating content accordingly. Unimodal models
+  take instructions and produce output in a single modality, while multimodal
+  models accept inputs and generate outputs across different modalities.
+
+    **[A Comprehensive Survey of AI-Generated Content (AIGC): A History of
+    Generative AI from GAN to ChatGPT](https://arxiv.org/abs/2303.04226)**
+    Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu,
+    Lichao Sun (2023) Published on *arXiv*
+
 - A framework for achieving strong natural language understanding with a single
   task-agnostic model through generative pre-training and discriminative
   fine-tuning.