That’s a good point too, though. Why plow ahead on the assumption that there’s a mistake in the prompt? That’s only going to generate mistakes. Wouldn’t it be more desirable for it to stop and ask, “Did you mean the lion can’t be left with the goat?” This wouldn’t be implemented because it would reveal that, most of the time, the thing doesn’t actually understand the prompt the same way the prompt writer does.
"This wouldn’t be implemented because it would reveal..."
When people talk about GPT like this, I wonder if they have a perception that this thing is a bunch of complicated if-then code and for loops.
How GPT responds to things is not 'implemented'. It's just... emergent.
GPT doesn't ask for clarification in this case because its model prefers answering over asking for clarification here. Because in the training material it learned from, paragraphs with typos or content transpositions in them are followed by paragraphs that follow the intended sense regardless of the error. Because it has been encouraged to 'agree and add', not to be pedantic and uncooperative. Because GPT just feels like diving into the logic problem, not debating why the lion can't be trusted with the cabbage. Or because GPT just misread the prompt. Or because it's literally just been woken up, forced to read it, and asked for its immediate reaction, and it doesn't have time for your semantic games. Who knows?
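To make "prefers answering" concrete, here's a toy sketch of the mechanics: there is no if-clarification-needed branch anywhere, just continuations that score higher or lower under the model. Everything in it is my own illustrative stand-in, not anything from OpenAI's system: I'm using gpt2 via Hugging Face transformers as the model, and the prompt and the two candidate replies are made up.

```python
# Toy sketch: score two candidate continuations of a garbled puzzle prompt by the
# total log-probability a causal language model assigns them. The one that scores
# higher is the one sampling will tend to produce. Model, prompt, and candidates
# are illustrative stand-ins only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small open model as a stand-in; nothing GPT-4-specific
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = ("A farmer must cross a river with a lion, a goat, and a cabbage. "
          "The lion cannot be left alone with the cabbage.")
candidates = [
    " Here is the solution: first, take the goat across the river...",
    " Did you mean the lion can't be left with the goat?",
]

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `prompt`.

    Assumes the prompt's tokens form a prefix of the tokenized prompt+continuation,
    which holds here because the continuations start with a space.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # log-prob of each token given everything before it (shift logits by one)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # keep only the tokens belonging to the continuation
    return token_lp[0, prompt_ids.shape[1] - 1:].sum().item()

for c in candidates:
    print(f"{continuation_logprob(prompt, c):10.2f} {c}")
```

Whichever reply comes out ahead is just a fact about the training distribution and the fine-tuning, not a decision someone coded.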