Those prompts are so irritating and so frequent that I’ve taken to just quickly picking whichever one looks worse at a cursory glance. I’m paying them; they shouldn’t expect high-quality work from me.
You know there's no such thing as ground truth here, right? Do you want to start every prompt with something like, "Respond in English, using standard capitalization and punctuation, following the rules of grammar per Strunk & White, with numbers represented as Arabic numerals in base-10 notation..."?
A lot of preferences have nothing to do with any notion of truth. Do you prefer code snippets or full programs? Paragraphs or bullet points? Heck, English or Japanese?
My awareness that these choices may influence future responses has actually hurt my response rate. The two responses are often so similar that I can imagine preferring either one in specific circumstances. While I'm sure that could be guided by the prompt, I'm often hesitant to pick a specific response when I can see the value of the other one in a different situation, and I don't want to bias future responses. Maybe with more specific prompting this wouldn't be such an issue, or maybe a better understanding of how inter-chat personalisation is applied would help (I may be missing some information on this too).
I know for a fact that as of yesterday I did not have to pick one to continue the conversation. It just maximized the second choice and displayed a 2/2 below the response.
Why not always pick the one on the left, for example? I understand wanting to speed through and not spend time doing labor for OpenAI, but it seems counter-productive to spend any time feeding it false information.
My assumption is that they measure the quality of user feedback, either on a per-user basis or in aggregate. I want them to interrupt me less, so I want them to decide either that I'm a bad teacher or that users in general are bad teachers.
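For what it's worth, here's a toy sketch of what that "measure feedback quality" step could look like. Everything in it (the agreement-with-majority weighting, the data shape, the names) is my own guess for illustration, not anything OpenAI has described:

```python
# Hypothetical sketch: down-weight raters whose A/B picks rarely agree
# with the majority pick for the same comparison. Purely speculative --
# nothing here reflects OpenAI's actual pipeline.
from collections import Counter, defaultdict

# (comparison_id, user_id, choice) tuples; choice is "A" or "B".
votes = [
    ("cmp1", "u1", "A"), ("cmp1", "u2", "A"), ("cmp1", "u3", "B"),
    ("cmp2", "u1", "B"), ("cmp2", "u2", "B"), ("cmp2", "u3", "A"),
    ("cmp3", "u1", "A"), ("cmp3", "u2", "A"), ("cmp3", "u3", "A"),
]

# Majority choice per comparison.
by_cmp = defaultdict(list)
for cmp_id, _, choice in votes:
    by_cmp[cmp_id].append(choice)
majority = {cmp_id: Counter(cs).most_common(1)[0][0] for cmp_id, cs in by_cmp.items()}

# A user's weight is the fraction of their votes matching the majority.
agree, total = Counter(), Counter()
for cmp_id, user, choice in votes:
    total[user] += 1
    agree[user] += choice == majority[cmp_id]

weights = {u: agree[u] / total[u] for u in total}
print(weights)  # e.g. {'u1': 1.0, 'u2': 1.0, 'u3': 0.333...}
```

If they do anything like that, consistently picking the worse answer should just drive your weight toward zero, which is exactly the outcome I'm after.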