After explaining to it the mistakes it made, it seems to come around:
I think I made the mistakes in the first place because I wasn't paying close enough attention to the details of the question. I was not considering that I owed a token to my friend, and I was not thinking about the fact that seven tokens could be enough to buy a toy that costs seven tokens.
I’ve made mistakes like this too, where I get fixated on a particular pat solution without considering the details of the new problem. In the AI’s case, it has probably memorized a bunch of solutions that override the details of this particular question.
It’s easy to make ChatGPT admit to a mistake and provide an explanation for it, even when it didn’t actually make a mistake. It still just follows the “what response would sound plausible here” route, without actually understanding that it made (or didn’t make) a mistake. Often enough, if you return to the original problem statement, it will return right back to its incorrect logic.
I'm trying to teach it to properly count beats in lines of music. I can get it to be correct by teaching it to split the line in half and count each half separately, but even when explicitly told to use that method, it fails again.