If it displays the outward appearance of reasoning, then it is reasoning. We don't evaluate humans any differently. There's no magic intell-o-meter that can detect the amount of intelligence flowing through a brain.
Anything else is just arguing semantics. The idea that "true" reasoning and "fake" reasoning both exist, but that we can't tell one from the other, is ridiculous.
You can't eat your cake and have it. Either "fake reasoning" is a thing and can be distinguished, or it can't be and it's just a made-up distinction.
If I have a calculator with a look-up table of all additions of natural numbers under 100, the calculator can "appear" to be adding despite the fact that it is not.
Yes, indeed. Bullets know how to fly, and my kettle somehow knows that water boils at 373.15 K! There's been an explosion of intelligence since the LLMs came about :D
This argument would hold up if LLMs were large enough to hold a look-up table of all possible valid inputs they can correctly respond to. They're not.
Until you ask it to add numbers above 100 and it falls apart. That is the point here: you found a distinction. If you can't find one, then you're arguing semantics. People who say LLMs can't reason have yet to find a distinction that doesn't also disqualify a bunch of humans.
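To make the look-up-table point concrete, here's a minimal sketch in Python (hypothetical names, not anything from the thread) of a "calculator" that only memorizes sums of naturals under 100:

```python
# A "calculator" that memorizes every sum of naturals under 100
# instead of computing anything.
table = {(a, b): a + b for a in range(100) for b in range(100)}

def fake_add(a, b):
    # Looks like addition for memorized inputs; returns None otherwise.
    return table.get((a, b))

print(fake_add(3, 4))    # 7    -- "appears" to add
print(fake_add(150, 2))  # None -- falls apart above 100
```

The distinction here is observable from the outside: probe inputs beyond the table and the fake adder fails, which is exactly the kind of test being asked for.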