From 41b4d52f4950d9fddd5791078f6264622447b09f Mon Sep 17 00:00:00 2001
From: Maxime MORGE <maxime.morge@univ-lille.fr>
Date: Thu, 20 Feb 2025 15:30:08 +0100
Subject: [PATCH] LLM4AAMAS: Add Huang23arxiv

---
 README.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/README.md b/README.md
index 28e3ed5..e6292ed 100644
--- a/README.md
+++ b/README.md
@@ -207,6 +207,15 @@ link between the inputs and the output.
   *Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, et al. (2022)*
   Presented at *NeurIPS*
 
+Even though CoT prompting has been shown to improve the performance of LLMs on
+various reasoning tasks, it remains unclear to what extent these models are
+capable of true reasoning or whether they simply rely on memorized patterns and
+heuristics to solve problems.
+
+- **[Towards Reasoning in Large Language Models: A
+  Survey](https://arxiv.org/abs/2212.10403)**
+  *Jie Huang and Kevin Chen-Chuan Chang (2023)* Published on *arXiv*
+
 ### RAG
 
 Retrieval-Augmented Generation (RAG) is a prompting strategy that involves
-- 
GitLab