> I guess the main (valid) concern is that LLMs get so good at thought
I don't think that's a valid concern, because LLMs can't think. They are generating tokens one at a time. They're calculating the most likely token to appear based on the arrangements of tokens that were seen in their training data. There is no thinking, there is no reasoning. If they seem like they're doing these things, it's because they are producing text that is based on unknown humans who actually did these things once.
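To make the "one token at a time" mechanism concrete, here's a minimal sketch of a single next-token step: the model emits one score (logit) per vocabulary item, softmax turns the scores into a probability distribution, and greedy decoding picks the argmax. The vocabulary and scores below are made up purely for illustration.

```python
import numpy as np

# Hypothetical vocabulary and model scores for one decoding step.
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([0.2, 2.1, 0.3, 0.9])

# Softmax: convert scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: take the single most likely token.
next_token = vocab[int(np.argmax(probs))]
print(next_token)  # -> cat
```

In a real model this step is repeated, appending each chosen token to the context before scoring the next one; the dispute in this thread is about what, if anything, that loop amounts to internally.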
> LLMs can't think. They are generating tokens one at a time
Huh? They are generating tokens one at a time - sure that's true. But who's shown that predicting tokens one at a time precludes thinking?
It's been shown that the models plan ahead, i.e. think more than just one token forward. [1]
How do you explain the world models that have been detected in LLMs? E.g. OthelloGPT [2] is just given sequences of games to train on, but it has been shown that the model learns to have an internal representation of the game. Same with ChessGPT [3].
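The OthelloGPT-style evidence comes from linear probes: if some board fact is linearly encoded in a hidden activation vector, a simple linear classifier trained on (activation, fact) pairs can read it out. Here's a toy sketch of that idea with synthetic data; the activations and the encoded "square occupied" fact are fabricated, whereas the real experiments probe actual model activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 32                        # pretend hidden-state width
w_true = rng.normal(size=d)   # direction that "encodes" one board square

acts = rng.normal(size=(500, d))        # fake hidden activations
square_occupied = acts @ w_true > 0     # the linearly encoded "board fact"

# Train the probe on 400 examples, test on the held-out 100.
probe = LogisticRegression().fit(acts[:400], square_occupied[:400])
print(probe.score(acts[400:], square_occupied[400:]))  # near-perfect accuracy
```

High probe accuracy on held-out data is the kind of evidence those papers use to argue the model maintains an internal board representation rather than memorizing move sequences.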
For tasks like this (and with words), real thought is required to predict the next token well. For example, if you don't understand chess at the level of Magnus Carlsen, how are you going to predict Magnus Carlsen's next move?
You wouldn't be able to, even from studying his previous games; you'd have to actually understand chess and think about what would be a good move (and in his style).
Yes, let's cite the most biased possible source: the company that's selling you the thing, which is banking on a runway funded by keeping the hype train going as long as possible...