> Anything that isn't in the training data set will not be produced as an output.
Thinking back to some of the examples I have seen, it feels as though this is not so. In particular, I'm thinking of a series where ChatGPT was prompted to invent a toy language with a simple grammar and then translate sentences between that language and English [1]. It seems implausible that the outputs produced here were in the training data set.
Furthermore, given that LLMs are probabilistic models and produce their output stochastically, it does not seem surprising that they might produce output not in their training data.
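As a toy illustration of that point (my own sketch, not anything from the linked posts): even a trivial bigram model, sampled stochastically, can emit sentences that never appear verbatim in its training corpus, simply by recombining fragments it has seen.

```python
import random

# Toy bigram "language model": count word pairs in a tiny corpus,
# then sample next words in proportion to those counts.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

counts = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts.setdefault(a, {})
        counts[a][b] = counts[a].get(b, 0) + 1

def sample_next(word, rng):
    """Sample the next word in proportion to bigram counts."""
    options = counts[word]
    total = sum(options.values())
    r = rng.random() * total
    for w, c in options.items():
        r -= c
        if r <= 0:
            return w
    return w  # numerical-edge fallback: last option

def generate(start, length, rng):
    """Chain stochastic next-word samples into a sentence."""
    out = [start]
    for _ in range(length - 1):
        if out[-1] not in counts:
            break  # dead end: word never appeared as a bigram prefix
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

rng = random.Random(0)
samples = {generate("the", 6, rng) for _ in range(200)}

# "the cat sat on the rug" is not in the corpus, yet every one of its
# bigrams is, so stochastic sampling readily produces it.
novel = "the cat sat on the rug"
print(novel in samples)
```

Nothing here depends on scale; the point is only that a probabilistic model's support extends beyond its training set whenever outputs are composed from smaller learned pieces.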
I agree that we do not have a good understanding of what reasoning, thought, or awareness is. One question I have been pondering lately is whether, when a possible solution or answer to some question pops into my mind, it is the result of an unconscious LLM-like process. But that can't be all of it; for one thing, unlike an LLM, I have at least a limited ability to assess my ideas for plausibility without being prompted to do so.
[1] https://maximumeffort.substack.com/p/i-taught-chatgpt-to-inv...