> This leads to output that we would classify as "very boring".
Not really. I set temperature to 0 for my local models, and it works fine.
The reason the cloud UIs don't allow a temperature of 0 is that models sometimes fall into infinite loops of repeated tokens, and that would break the suspension of disbelief if the public saw it.
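To make the mechanism concrete, here is a minimal sketch of temperature sampling (the function name and toy logits are my own illustration, not from any particular library): at temperature 0 it degenerates to argmax, i.e. greedy decoding, which is fully deterministic and can therefore get stuck repeating the same token sequence.

```python
import numpy as np

def sample_token(logits, temperature):
    """Pick the next token id from raw logits.

    temperature == 0 reduces to greedy decoding (argmax): fully
    deterministic, so a model whose context keeps producing similar
    logits can loop on the same tokens forever.
    """
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    # softmax over temperature-scaled logits (max subtracted for stability)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy demo with made-up logits: T=0 always picks token 0;
# T=0.8 usually picks 0 but sometimes 1 or 2.
logits = np.array([2.0, 1.5, 0.5])
print(sample_token(logits, 0))
print(sample_token(logits, 0.8))
```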
You must be using more recent (or just different) models than the ones I tried. Mine easily returned garbage at temperature 0. (Unfortunately, I'm not in a position to retest and report back.)
LLM behaviour and benchmarking at low or zero temperature would be a worthwhile topic to investigate.