From 0d396f8a8608a374d08e55e31be24596739ff62e Mon Sep 17 00:00:00 2001
From: Tetiana Yemelianenko <tyemel.mzeom@gmail.com>
Date: Tue, 12 Nov 2024 14:56:52 +0000
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 62070e8..d68382d 100644
--- a/README.md
+++ b/README.md
@@ -23,10 +23,10 @@ The code for fine-tuning a classification model is based on the <a href="https:/
 
 ## Dataset
 
-The WikiArt dataset used for fine-tuning could be found <a href="https://huggingface.co/datasets/huggan/wikiart">here</a>
+The WikiArt dataset used for fine-tuning can be found <a href="https://huggingface.co/datasets/huggan/wikiart">here</a>. Before fine-tuning, the dataset should be downloaded and filtered: exclude the 'Unknown genre' class for genre classification fine-tuning, and select artists from the list of the most popular artists for artist classification fine-tuning. For style classification fine-tuning, the original dataset can be used as-is.
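+
+A minimal sketch of the filtering step with the `datasets` library (the `genre`/`artist` feature names, the exact 'Unknown genre' label string and the example artist list are assumptions, check the dataset card):
+
+```python
+from datasets import load_dataset
+
+ds = load_dataset("huggan/wikiart", split="train")
+
+# Genre fine-tuning: drop the 'Unknown genre' class (adjust to the exact label string)
+genre_names = ds.features["genre"].names
+unknown_id = genre_names.index("Unknown Genre")
+ds_genre = ds.filter(lambda ex: ex["genre"] != unknown_id)
+
+# Artist fine-tuning: keep only a chosen list of popular artists (hypothetical list)
+artist_names = ds.features["artist"].names
+popular = {"vincent-van-gogh", "claude-monet", "pablo-picasso"}
+keep_ids = {i for i, name in enumerate(artist_names) if name in popular}
+ds_artist = ds.filter(lambda ex: ex["artist"] in keep_ids)
+```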
 
 ## Steps
-To reproduce the steps first you need to finetune models for genre, style and artist classification on WikiArt dataset or used fine-tuned adapters. Then, using fine-tuned models you need to calculate embeddings for the images from WikiArt dataset and create ANNOY indexes. Due to the relatively big size, these files are available upon a request.
+To reproduce the steps, first fine-tune models for genre, style and artist classification on the downloaded WikiArt dataset, or use the provided fine-tuned adapters (folder LoRA adapters). Then, using the fine-tuned models, calculate embeddings for the images from the WikiArt dataset and create ANNOY indexes for each fine-tuned model. Due to their relatively large size, these files (embeddings and ANNOY indexes) are available upon request.
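+
+A minimal sketch of the indexing step with the `annoy` library (the embedding dimension, the distance metric and the file name are assumptions; the placeholder embeddings stand for the vectors produced by one fine-tuned model, and the same construction is repeated per model):
+
+```python
+import numpy as np
+from annoy import AnnoyIndex
+
+dim = 768                               # assumed embedding dimension
+embeddings = np.random.rand(10, dim)    # placeholder for the model's image embeddings
+
+index = AnnoyIndex(dim, "angular")      # assumed metric
+for i, vec in enumerate(embeddings):
+    index.add_item(i, vec)              # one item per WikiArt image
+index.build(100)                        # number of trees, a speed/accuracy trade-off
+index.save("wikiart_genre.ann")         # assumed file name, one index per model
+```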
 
 ## Citation
 ```
-- 
GitLab