From cd74ec576b6fa49a425d949c29d56a8ea816d1d0 Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Thu, 20 Feb 2025 13:00:46 +0100
Subject: [PATCH] LLM4AAMAS: distinguish unimodal and multimodal models

---
 README.md | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index e07212f..28e3ed5 100644
--- a/README.md
+++ b/README.md
@@ -137,9 +137,7 @@ Many models are available at the following URLs:
 [https://www.nomic.ai/gpt4all](https://www.nomic.ai/gpt4all) and
 [https://huggingface.co/models](https://huggingface.co/models).
 
-- **[GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)**
-    *OpenAI Team (2024)*
-    Published on *arXiv*
+### Unimodal models
 
 - **[The Llama 3 Herd of Models](https://arxiv.org/abs/2407.21783)**
     *Meta Team (2024)*
@@ -168,13 +166,22 @@ Many models are available at the following URLs:
 - **[Mistral Small 3](https://mistral.ai/news/mistral-small-3/)**
     *Mistral AI team (2025)*
 
-
 - **[The Lucie-7B LLM and the Lucie Training Dataset: Open Resources for
   Multilingual Language Generation](https://arxiv.org/abs/)** *Olivier Gouvert,
   Julie Hunter, Jérôme Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais,
   Anastasia Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré
   (2025)*
 
+### Multimodal models
+
+- **[GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)**
+    *OpenAI Team (2024)*
+    Published on *arXiv*
+
+- **[Visual Instruction Tuning (LLaVA: Large Language and Vision Assistant)](https://proceedings.neurips.cc/paper_files/paper/2023/file/6dcf277ea32ce3288914faf369fe6de0-Paper-Conference.pdf)**
+    *Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee (2023)*
+    Published in *Advances in Neural Information Processing Systems (NeurIPS
+    2023)*
 
 ## Prompt engineering
 
-- 
GitLab