diff --git a/README.md b/README.md
index fc2ce228d5db7784da84299ce19cbacc14171d76..c128c1ed64e84337a8130ddb1c4e5d892aa9ca4f 100644
--- a/README.md
+++ b/README.md
@@ -86,7 +86,9 @@ Newt
 Figure presents a violin plot illustrating the share of the total amount (\$100)
 that the dictator allocates to themselves for each model. The median share taken by
 <tt>GPT-4.5</tt>, <tt>Llama3</tt>, <tt>Mistral-Small</tt>, and <tt>DeepSeek-R1</tt> through one-shot decisions is
-\$50, likely due to a corpus-based biases like term frequency. When we ask the
+\$50, likely due to corpus-based biases such as term frequency.
+The median share taken by <tt>mixtral:8x7b</tt> and <tt>Llama3.3:latest</tt>
+is \$60. When we ask the
 models to generate a strategy rather than a one-shot action, all models
 distribute the amount equally, except <tt>GPT-4.5</tt>, which retains about
 $70\%$ of the total amount. Interestingly, under these standard conditions,
@@ -99,7 +101,6 @@ preferences, or contexts.
 
 - Our sensitivity analysis of the temperature parameter reveals that the
 portion retained by the dictator remains stable. However, the decisions become
 more deterministic at low temperatures, whereas allocation diversity increases at