Top-down style of AI capability denial: you've got an abstract idea, like "next token prediction", and that's all you need to know. No matter what the AI does, it has no chance to prove it reasons.

Why not look at what the model actually does? Maybe in all those tensors there are reasoning principles encoded.




I've spent the last decade in the most technical parts of this industry; it is my job to expunge this credulousness.

It is indeed trivial to show that P(A|B) is a poor model of "B => A" (and of "B causes A", and many other relata). Software engineers, philosophers and experimental scientists seem pretty good at seeing this; people who "convert into" engineering are totally dumbfounded by it.

P(A|B) becomes an increasingly 'useful' model of arbitrary relations as the implicit model P_m(A|B) grows to include all instances of A,B. That's what digitising all of human history and storing it in the weights of an LLM does.

This all follows from basic stats you'd be taught in an applied statistics course, one never taken by most in the ML industry.

(Note it's still a broken model, because in most cases there's an infinite supply of novel (A, B) pairs that cannot be modelled with this sort of inductive learning.)
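
A toy sketch of the P(A|B) point (my own illustration, with made-up numbers, not anything measured from an LLM): in a world where B only usually leads to A, the estimated conditional looks like a reliable rule, while the implication "B => A" is simply false.

    import random

    random.seed(0)

    # Toy world where B does NOT imply A: when B holds, A follows only 95% of the time.
    samples = []
    for _ in range(100_000):
        b = random.random() < 0.5
        a = random.random() < (0.95 if b else 0.50)
        samples.append((b, a))

    n_b  = sum(1 for b, _ in samples if b)
    n_ab = sum(1 for b, a in samples if b and a)
    print(f"estimated P(A|B) ~ {n_ab / n_b:.3f}")         # ~0.95, looks like a 'law'

    # But the logical claim "B => A" is false: counterexamples exist.
    n_counter = sum(1 for b, a in samples if b and not a)
    print(f"cases with B true and A false: {n_counter}")  # thousands of them

More data only makes the estimate of P(A|B) more stable; it never turns a conditional probability into the implication.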

Engineering, at its heart, is a kind of pseudoscience (or, if you prefer: a magic trick). You find some heuristic which behaves as if it's the target under fragile but engineering-stable conditions.

The problem with engineers who only have magic tricks in their toolkit is this credulousness. Homeopathy worked: you put people in beds, give them water, and they recover (indeed, better than if you'd leeched them).


> people who "convert into" engineering are totally dumbfounded by it.

How does that human failure to reason affect the evaluation of GPT's reasoning capabilities?

You're being very dogmatic in claiming that anyone who has a different opinion than you is "desperate to not think clearly" or "gullible", but you're exhibiting errors in reasoning that are rather ironic given the context.

Edit:

You don't know how human reasoning works - no-one does. People tend to assume that our post-hoc conscious ability to "understand" the reasoning process must somehow relate to the actual operations the brain performs when reasoning. But that's not necessarily the case.

Note that I'm not claiming that LLMs are equivalent to humans in their reasoning ability: after all, in some cases they're demonstrably, functionally superior, especially compared to the average human. But in other significant cases, they're certainly worse.

The point is we shouldn't impose too many assumptions on how reasoning "should" work, and there seems to be a strong tendency to do that, which you're exhibiting.


Maybe this is a clearer way of thinking about the problem:

Imagine you call a friend for help on an essay (on any topic of your choice). But they have access to Google and libgen, and they're quick at looking things up.

For a while, on the call, you think your friend is a genius, but then you get suspicious. How would you tell if they knew what they were talking about?

NB: whether they do or not has nothing to do with whatever questions you come up with -- they do or they don't. And, indeed, people are easy to fool.

You might ask, "why care if they know?" But then you have serious questions on things which matter, not just essays. And googling isn't good enough.

Being able to weight information by expertise, to engage in reasoning in which contradictions are impermissible, to engage in causal inference, in abduction -- in imagining possibilities which have never occurred exactly before -- these suddenly become vital.

And your friend who is not a PhD in anything, let alone everything, suddenly becomes vastly more dangerous and insidious.


> they do or they don't

Only a Sith...

Seriously, the idea that "reasoning" could be a binary property seems very disconnected from the reality of the situation.

> reasoning in which contradictions are impermissible

Every human on the planet would fail that test, including you.

"Fooling apes" is much easier when the ape you need to fool is yourself.


So from your analogy, you're saying the average high school student (approximately the "friend" in your scenario) is incapable of reasoning? Sure, they might be bad at it (and know nothing about formal methods), but most people's definition of reasoning is nowhere near that strict.


>How does that human failure to reason affect the evaluation of GPT's reasoning capabilities?

Isn't that your argument? That not knowing how humans reason means that we can't say that GPT isn't reasoning?


Part of what I'm pointing out is that not knowing how humans reason means that we can't say that some particular mechanism - like LLMs - is not capable, in principle, of reasoning. There seems to be a strong tendency to dismiss what LLMs are doing based on the idea that they're "stochastic parrots" or whatever. But we don't actually know that we're not "just" stochastic parrots.

Of course, there's the possibly thorny question of consciousness, but we actually have no reason to believe that that's required for reasoning or intelligence.


>> But we don't actually know that we're not "just" stochastic parrots.

Sorry to barge in, but we have a fairly good reason to believe we are at the very least not _just_ stochastic parrots: all of mathematics, which is clearly not the result of simply taking statistics over previously seen results. In fact, when people ask "what is reasoning?" (as they often do to dismiss opinions that LLMs can't reason), "mathematics is reasoning" is a pretty damn good answer.

Which all means that, while we may well have a "stochastic parrot" module somewhere in our mind, that is far from all we've got. But the question is not about _our_ mind; it's about LLMs and their capabilities. And we know that LLMs are statistical language models, because that's what they are made to be.

And if someone thinks that LLMs, too, like humans, are something beyond the statistical language models they're made to be, then that someone must explain why. "It looks that way" is a very poor explanation, but so far that's all we've got. Literally, people just look at LLMs' output and say "they're not just stochastic parrots!".


Nobody is saying that human brains are picnic blankets, so we aren't debating that one. I'll grant you that it's more reasonable to think that what the human brain does resembles a "stochastic parrot" more than it does a picnic blanket, but the burden is on anyone saying the brain works that way to prove it, and thereby prove that stochastic parrotism is reasoning, rather than just alleging the possibility as an affirmative defense.

But really, why would anyone think that reasoning is stochastic in that way? I never did, and don't now. That hasn't changed just because LLMs demonstrate results that in some cases match what reasoning would produce.


There's an amusing formulation of model risk here. Consider that the person prompting ChatGPT has an ability to reason, say FooledApe. Then we have the problem of evaluating,

P(... P(P(gptCanReason | FooledApe) | P(HumanReasoning | FooledApe)) | FooledApe ... | FooledApe)

My preference is to remove the condition 'Fooled'; alas, I'm unable to remove 'Ape'.


Maybe science has a formal process somewhere, but it is mostly an open-ended, iterative exploration of ideas, especially around emerging fields.


What is a "reasoning principle" and how might it be "encoded"?

Also, are we really calling it "denial" now? Because it's a little funny to make the move of simply psychologizing away the criticism, when your actual AI argument, presumably, is that inner consciousness/rationality is black boxes all the way down anyway. Like, how can you presume to look inside the mind of your critic so confidently, to assert they are in denial, but in the same breath say that such a thing is in principle impossible? Don't you think it maybe takes away the force of your argument? Or at least goes against its spirit?



