I think this is a motte-and-bailey, "true and trivial vs incredible and false" type of thing. Given a sufficiently flexible interpretation of "sample from multiple cases and merge", humans do the same thing. Given a very literal interpretation, this is obviously not what networks do: aside from one paper to the contrary that relied on a very tortured interpretation of "linear", neural networks specifically do not output a linear combination of input samples.
And frankly, any interaction with even GPT 3.5 should demonstrate this. It's not hard to make the network produce output that was never in the training set at all, in any form. Even just the fact that its skills generalize across languages should already disprove this claim.
> It's not hard to make the network produce output that was never in the training set at all, in any form.
Honest request, because I am a bit skeptical: can you give an example of something it was not trained on in any form but can still produce output for? And is that output actually meaningful?
Because I have run a few experiments on ChatGPT with two spoken languages that have standard written forms but little presence on the internet, and it just makes stuff up.
Well, it depends on the standard of abstraction that you accept. I don't think ChatGPT has any skills (or at least, we've seen no evidence of any) that weren't represented in its training set. But you can just invent an operation. For instance, something like: "ChatGPT: write code that takes an even-length string and inverts the order of every second character." Actually, let me go try that...
And here we go! https://poe.com/s/UJxaAK9aVN8G7DLUko87 Note that it took me a long time, because GPT 3.5 really really wanted to misunderstand what I was saying; there is a strong bias to default to its training samples, especially when the request resembles a common idea. But eventually, with only moderate pushing, its code did work.
What's interesting to me here is that after I threw the whole "step by step" shebang at it, it produced code that was almost right. Surprisingly often, GPT will end up with code that's clever in methodology, but wrong in a very pedestrian way. IMO this means there has to be something wrong with the way we're training these networks.
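For reference, here's a minimal sketch of what I had in mind with that prompt (my own interpretation of the task, not the code GPT produced): reverse the order of the "every second" characters, i.e. the ones at odd indices, while leaving the rest in place.

```python
def invert_every_second(s: str) -> str:
    """Reverse the order of the odd-index characters of an even-length string,
    keeping the even-index characters where they are."""
    if len(s) % 2 != 0:
        raise ValueError("expected an even-length string")
    chars = list(s)
    chars[1::2] = chars[1::2][::-1]  # reverse just the odd-index slice
    return "".join(chars)

# e.g. invert_every_second("abcdef") -> "afcdeb"
```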