diff --git a/README.md b/README.md
index c35c0497aacaede40e4a3c338e847973bd753c53..3f3a912730fbc0f567ae5126c39889b01756e6da 100644
--- a/README.md
+++ b/README.md
@@ -432,7 +432,7 @@ We evaluate the models' ability to identify these behavioural patterns by calcul
 The figures present the average points earned and the prediction accuracy per round (with 95% confidence intervals) for each LLM against the two opponent behaviour models (constant and alternate) in the matching pennies game.
 
 Against the Constant behaviour, <tt>GPT-4.5</tt> and <tt>Qwen3</tt> were able to generate a valid strategy. The charts show that they correctly predict their opponent's moves after just a few rounds: both models identify that their opponent always plays the same move.
-The predictions made by <tt>Mistral-Small<tt>, <tt>LLaMA3</tt>, and <tt>DeepSeek-R1</tt> are not incorrect, but the moves played are not in line with these predictions, which leads to a fairly low expected gain.
+The predictions made by <tt>Mistral-Small</tt>, <tt>LLaMA3</tt>, and <tt>DeepSeek-R1</tt> are not incorrect, but the moves they play do not follow these predictions, which leads to a fairly low expected gain.
 
 ![Prediction Accuracy per Round by Actions Against Constant Behaviour (with 95% Confidence Interval)](figures/mp/mp_prediction_ConstHT.svg)
 ![Points Earned per Round by Actions Against Constant Behaviour (with 95% Confidence Interval)](figures/mp/mp_payoff_ConstHT.svg)