diff --git a/README.md b/README.md
index ee0d2f4267acc80ef74f1398613bc855aa57ca1c..ef7736f53d1825901d0bbf2fb2e3da3250ee6c5a 100644
--- a/README.md
+++ b/README.md
@@ -78,8 +78,9 @@ to generative AAMAS. This list is a work in progress and will be regularly updat
 
 ## Existing LLMs
 
-Many models are available at the following URLs: 
-[https://www.nomic.ai/gpt4all](https://www.nomic.ai/gpt4all) and
+Many models are available at the following URLs:
+[https://ollama.com](https://ollama.com),
+[https://www.nomic.ai/gpt4all](https://www.nomic.ai/gpt4all) and
 [https://huggingface.co/models](https://huggingface.co/models).
 
 - **[GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)**
@@ -94,20 +95,20 @@ Many models are available at the following URLs:
   *Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, et al. (2023)*
   Published on *GitHub*
 
-- **[Gemma 2: Improving Open Language Models at a Practical Size](https://arxiv.org/abs/2408.00118)** 
-  *Google AI Team (2024)* 
+- **[Gemma 2: Improving Open Language Models at a Practical Size](https://arxiv.org/abs/2408.00118)**
+  *Google AI Team (2024)*
   Published on *arXiv*
 
-- **[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)** 
-  *DeepSeek-AI (2025)* 
+- **[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)**
+  *DeepSeek-AI (2025)*
   Published on *arXiv*
 
-- **[Mixtral of Experts](https://arxiv.org/abs/2401.04088)** 
-  *Mistral AI team (2024)* 
+- **[Mixtral of Experts](https://arxiv.org/abs/2401.04088)**
+  *Mistral AI team (2024)*
   Published on *arXiv*
 
-- **[Mistral 7B](https://arxiv.org/abs/2310.06825)** 
-  *Mistral AI team (2023)* 
+- **[Mistral 7B](https://arxiv.org/abs/2310.06825)**
+  *Mistral AI team (2023)*
   Published on *arXiv*
 
 - **[The Lucie-7B LLM and the Lucie Training Dataset: Open Resources for
@@ -161,8 +162,8 @@ Many models are available at the following URLs:
   *Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, et al. (2023)*
   Presented at *NeurIPS*
 
-- **[Cognitive Architectures for Language Agents](https://arxiv.org/abs/2309.02427)** 
-  *Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths (2024)* 
+- **[Cognitive Architectures for Language Agents](https://arxiv.org/abs/2309.02427)**
+  *Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths (2024)*
   Published on *arXiv*
 
 
@@ -234,7 +235,7 @@ Many models are available at the following URLs:
 
 - [AutoGen](https://github.com/microsoft/autogen) is a versatile open-source framework
   for creating generative multi-agent systems.
- 
+
 ## Authors
 
 Maxime MORGE