Are you saying that GPT is not a stochastic parrot, or that GPT is not returning coherent reasoning?
Because if it's the latter, the evidence is rather against you. People like to cherry-pick examples where GPT gets reasoning wrong, but it's getting it right often enough, millions of times a day, that people keep using it.
And it's not as if humans don't get reasoning wrong. In fact the humans who say GPT can't reason are demonstrating that.
I'm continually surprised at how a large chunk of HN's demographic seemingly struggles with the simple notion that a black box's interface reveals very little about its contents.
I'm not saying that GPT-4 is or isn't reasoning, just that discounting the possibility solely because it interfaces with the world via a stochastic parrot makes no sense to me.
Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.
Here's a silly example I thought of. We can ask whether a certain bird is capable of "sorting". We can place objects of different sizes in front of the bird, and we observe that the bird can rearrange them in order of increasing size. Does it matter what internal heuristics or processes the bird is using? If it sorts the objects, it is "sorting".
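To make the point concrete, here's a minimal Python sketch of that kind of functional test (the Item class, passes_sorting_test helper, and the idea of wrapping the bird as a callable are all hypothetical, just for illustration). The test only inspects the output arrangement, never the internals of whatever produced it:

    from dataclasses import dataclass
    import random

    @dataclass
    class Item:
        size: float

    def passes_sorting_test(sorter, trials=100):
        # Black-box check: feed the "sorter" random objects and verify the
        # result is the same objects arranged by increasing size. We never
        # look at how the sorter works internally.
        for _ in range(trials):
            items = [Item(random.uniform(1, 10)) for _ in range(8)]
            arranged = sorter(items)
            sizes = [it.size for it in arranged]
            if sizes != sorted(sizes):
                return False
            if sorted(sizes) != sorted(it.size for it in items):
                return False
        return True

    # A conventional implementation passes...
    print(passes_sorting_test(lambda items: sorted(items, key=lambda i: i.size)))
    # ...and so would the bird, if we could wrap its behavior in a function.

By this functional definition, anything that passes the test is "sorting", whether it's quicksort, a lookup table, or a bird nudging pebbles around.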
To me, it seems perfectly obvious that GPT-4 is reasoning. It's not very good at it and it frequently makes mistakes. But it's also frequently able to make correct logical deductions. To me this is all stupid semantic games and goalpost-moving.
> Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.