Yeah, that's true. I feel this is still aligned with the above explanation, though. It attempts to complete the prompt as well as possible. If the prompt is itself inconsistent, then the distribution over completions can, in some sense, be anything.
Except, GPT is smarter than that. Even an inconsistent prompt is still more likely to have some kind of nonsense in the same vein as the asking.