> you wouldn't need to feed it examples of this puzzle with every single permutation
No, but you would need "enough" of them, whatever that number happens to be.
> It would only need a few descriptions of each animal plus a few examples of the puzzle to understand the logic.
That's the mistake.
GPT itself can't combine those two things. That work has to have already been done in the text of the training corpus.
And the result is not the same as "understanding logic". It doesn't model the meaning of the puzzle: it models the structure of examples.
GPT can't distinguish the meanings of rules; it can only follow examples. It can't invent new strategies; it can only assemble new combinations of strategy parts, picking the parts that seem closest and putting them in a familiar order.
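To make that distinction concrete, here's a deliberately crude sketch: a bigram next-token sampler "trained" on a couple of made-up fox/hen/grain river-crossing transcripts. The puzzle, the transcripts, and every token in them are hypothetical, and a bigram table is obviously nothing like a transformer; the point is only to show what "models the structure of examples" looks like when nothing in the model encodes a rule like "don't leave the fox with the hen".

```python
import random
from collections import defaultdict

# Hypothetical transcripts of a fox/hen/grain river-crossing puzzle,
# invented purely for this sketch.
transcripts = [
    "take hen across | return alone | take grain across | return with hen "
    "| take fox across | return alone | take hen across",
    "take hen across | return alone | take fox across | return with hen "
    "| take grain across | return alone | take hen across",
]

# Learn which token follows which -- pure example structure, no rules.
successors = defaultdict(list)
for t in transcripts:
    tokens = t.split()
    for a, b in zip(tokens, tokens[1:]):
        successors[a].append(b)

def generate(start="take", max_steps=25):
    """Sample a 'solution' by imitating example structure alone."""
    out = [start]
    for _ in range(max_steps):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate())
# The output is always move-shaped, because the examples were move-shaped.
# It is never rule-checked: tell the model the grain eats the fox instead,
# and the sampled text doesn't change, because only surface statistics exist.
```

"Emits well-formed solutions" and "models the rules" are separable properties, and more examples sharpen the first without ever supplying the second.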
GPT doesn't play games; it plays plays.