diff --git a/notebooks/Confusion Matrices.ipynb b/notebooks/Confusion Matrices.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..d7a11c5ff28844e46f6ac7f034354e2d04f52b8e
--- /dev/null
+++ b/notebooks/Confusion Matrices.ipynb	
@@ -0,0 +1,175 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "11511929",
+   "metadata": {},
+   "source": [
+    "# Confusion Matrices\n",
+    "\n",
+    "We start by including the EDdA modules from the [project's gitlab](https://gitlab.liris.cnrs.fr/geode/EDdA-Classification)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a5f3d434",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from EDdA import data\n",
+    "from EDdA.classification import confusionMatrix, metrics, toPNG, topNGrams\n",
+    "import os"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4c3064ea",
+   "metadata": {},
+   "source": [
+    "Then we load the training set into a new data structure called a `Source`, which contains a `pandas` `Dataframe` and a hash computed from the list of exact articles \"coordinates\" (volume and article number, and their order matters) contained in the original tsv file."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "5ad65685",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "source = data.load('training_set')"
+   ]
+  },
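+  {
+   "cell_type": "markdown",
+   "id": "2b7e9c3d",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check, we can display the source's `hash` attribute (the same one used below to name the output directory), which identifies this exact version of the training set."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6d1f0a2e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "source.hash"
+   ]
+  },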
+  {
+   "cell_type": "markdown",
+   "id": "4e958e04",
+   "metadata": {},
+   "source": [
+    "This function rationalises the name of the files containing the confusion matrices to produce."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "id": "545bdb4f",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def preparePath(root, source, n, ranks, metricName):\n",
+    "    path = \"{root}/confusionMatrix/{inputHash}/{n}grams_top{ranks}_{name}.png\".format(\n",
+    "            root=root,\n",
+    "            inputHash=source.hash,\n",
+    "            n=n,\n",
+    "            ranks=ranks,\n",
+    "            name=metricName\n",
+    "        )\n",
+    "    os.makedirs(os.path.dirname(path), exist_ok=True)\n",
+    "    return path"
+   ]
+  },
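+  {
+   "cell_type": "markdown",
+   "id": "9e4a7c1b",
+   "metadata": {},
+   "source": [
+    "For example, preparing the path for the bigram / top-50 `colinearity` matrix under the current directory returns the file name below and creates its parent directory as a side effect."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "3c8d5f2a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "preparePath('.', source, 2, 50, 'colinearity')"
+   ]
+  },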
+  {
+   "cell_type": "markdown",
+   "id": "4079559f",
+   "metadata": {},
+   "source": [
+    "Then we only have to loop on the n-gram size (`n`), the number of `ranks` to keep when computing the most frequent ones and the comparison method (the metrics' `name`)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "b39c5be0",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "for n in range(1,4):\n",
+    "    for ranks in [10, 50, 100]:\n",
+    "        vectorizer = topNGrams(source, n, ranks)\n",
+    "        for name in ['colinearity', 'keysIntersection']:\n",
+    "            imagePath = preparePath('.', source, n, ranks, name)\n",
+    "            toPNG(confusionMatrix(vectorizer, metrics[name]), imagePath)"
+   ]
+  },
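+  {
+   "cell_type": "markdown",
+   "id": "7a5b3e9c",
+   "metadata": {},
+   "source": [
+    "Once the loop has run, any of the generated matrices can be inspected inline; here we rebuild one of the paths with the same helper and display it (assuming the loop above completed without error)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "f4e2d8b1",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from IPython.display import Image\n",
+    "\n",
+    "# Display the trigram / top-100 keysIntersection matrix generated above\n",
+    "Image(filename=preparePath('.', source, 3, 100, 'keysIntersection'))"
+   ]
+  }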
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "/gnu/store/2rpsj69fzmcnafz4rml0blrynfayxqzr-python-wrapper-3.9.9/bin/python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.9.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}