From 8067a387052143d9e60c48edd312b815473679ae Mon Sep 17 00:00:00 2001
From: mmorge <maxime.morge@univ-lyon1.fr>
Date: Tue, 1 Apr 2025 11:26:00 +0200
Subject: [PATCH] Improve Preference Elicitation

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index fc2ce22..c128c1e 100644
--- a/README.md
+++ b/README.md
@@ -86,7 +86,9 @@
 Newt Figure presents a violin plot illustrating the share of the total
 amount (\$100) that the dictator allocates to themselves for each model. The
 median share taken by <tt>GPT-4.5</tt>, <tt>Llama3</tt>, <tt>Mistral-Small</tt>, and <tt>DeepSeek-R1</tt> through one-shot decisions is
-\$50, likely due to a corpus-based biases like term frequency. When we ask the
+\$50, likely due to corpus-based biases like term frequency.
+The median share taken by <tt>mixtral:8x7b</tt> and <tt>Llama3.3:latest</tt>
+is \$60. When we ask the
 models to generate a strategy rather than a one-shot action, all models
 distribute the amount equally, except <tt>GPT-4.5</tt>, which retains about
 $70\%$ of the total amount. Interestingly, under these standard conditions,
@@ -99,7 +101,6 @@
 preferences, or contexts.
 
-
 Our sensitivity analysis of the temperature parameter reveals that the
 portion retained by the dictator remains stable. However, the decisions
 become more deterministic at low temperatures, whereas allocation diversity
 increases at
-- 
GitLab