An interesting side effect I noticed with ChatGPT-4o is that the quality of output increases if you insult it after prior mistakes. It is as if it tries harder when it perceives the user to be seriously pissed off.
The same doesn't work on Claude Opus for example. The best course of action is to calmly explain the mistakes and give it some actual working examples. I wonder what this tells us about the datasets used to train these models.