
>How does that human failure to reason affect the evaluation of GPT's reasoning capabilities?

Isn't that your argument? That not knowing how humans reason means that we can't say that GPT isn't reasoning?




Part of what I'm pointing out is that not knowing how humans reason means that we can't say that some particular mechanism - like LLMs - is not capable, in principle, of reasoning. There seems to be a strong tendency to dismiss what LLMs are doing based on the idea that they're "stochastic parrots" or whatever. But we don't actually know that we're not "just" stochastic parrots.

Of course, there's the possibly thorny question of consciousness, but we actually have no reason to believe that that's required for reasoning or intelligence.


>> But we don't actually know that we're not "just" stochastic parrots.

Sorry to barge in, but we have a fairly good reason to believe we are at the very least not _just_ stochastic parrots: all of mathematics, which is clearly not the result of simply taking statistics over previously seen results. In fact, when people ask "what is reasoning?" (as they often do to dismiss opinions that LLMs can't reason), "mathematics is reasoning" is a pretty damn good answer.

Which all means that, while we may well have a "stochastic parrot" module somewhere in our minds, that is far from all we have. But the question is not about _our_ mind; it is about LLMs and their capabilities. And we know that LLMs are statistical language models, because that's what they are made to be.

And if someone thinks that LLMs, like humans, are something more than the statistical language models they are made to be, then that someone must explain why. "It looks that way" is a very poor explanation, but so far that's all we have. Literally, people just look at LLMs' output and say "they're not just stochastic parrots!"


Nobody is saying that human brains are picnic blankets, so we aren't debating that one. I'll grant you that it's more reasonable to think that what the human brain does resembles a "stochastic parrot" more than it does a picnic blanket. But the burden is on anyone saying the brain thinks that way to prove it, and thereby prove that stochastic parrotism is reasoning, as opposed to just alleging the possibility as an affirmative defense.

But really, why would anyone think that reasoning is stochastic in that way? I never did, and do not now. That hasn't changed just because LLMs demonstrate results that are in some cases equivalent to what could have been produced by reasoning.


There's an amusing formulation of model risk here. Consider that the person prompting ChatGPT has an ability to reason, say FooledApe. Then we have the problem of evaluating,

P(... P(P(P(gptCanReason | FooledApe) | P(HumanReasoning | FooledApe)) | FooledApe) ... | FooledApe)

My preference is to remove the condition 'Fooled'; alas, I am unable to remove 'Ape'.
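
To make the intuition behind that nested expression concrete, here is a minimal Python sketch. The names and numbers are made up purely for illustration: each extra layer of "a fallible evaluator judging a fallible evaluator" simply discounts whatever credence you started with.

    # Illustrative sketch of the nested "model risk" above: every level of
    # judgment is made by a fallible evaluator (FooledApe), so each extra
    # layer of conditioning discounts the belief further.

    def discounted_belief(raw_belief: float, evaluator_reliability: float, depth: int) -> float:
        """Recursively condition a belief on the reliability of the evaluator
        who formed it, `depth` levels deep."""
        if depth == 0:
            return raw_belief
        return discounted_belief(raw_belief * evaluator_reliability,
                                 evaluator_reliability, depth - 1)

    # Hypothetical inputs: an initial 0.9 credence that GPT can reason,
    # formed by an evaluator who reasons correctly 80% of the time.
    for levels in (0, 1, 3, 10):
        print(levels, round(discounted_belief(0.9, 0.8, levels), 4))

With those made-up numbers, ten layers of self-doubt take the 0.9 credence below 0.1, which is roughly the joke: you can never fully condition 'Ape' out of the evaluation.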



