But how can you be sure? You talk with confidence, as if evidence exists to prove what you say, but none of that evidence exists. It's almost as if you're an LLM yourself, making up a claim with zero evidence. Sure, you have examples that correlate with your point, but nothing that proves it.
Additionally, there exists LLM output that runs counter to your point. Explain LLM output that is correct and novel. There is correct LLM output on queries so novel and unique that they don't exist in any form in the training data. You can easily, and I mean really easily, make an LLM produce such output.
Again, you're making up your answer here without proof or evidence, which is identical to the extrapolation the LLM does. And your answer runs counter to every academic author on that paper. So what I don't understand from people like you is the level of unhinged confidence that borders on religion.
Like how you were talking about the wrongness of certain LLM output making the distinction clearest, while obviously ignoring the output that makes it unclear.
It's utterly trivial to get LLMs to output things that disprove your point. But what's more insane is that you can get LLMs to explain everything being debated in this thread to you.
I ignored your other response to me because I didn't see anything in the abstract that contradicted my posts, but maybe there's something deeper in the paper that does. I'll read more of it later.
I think, though, the disconnect between us is that I don't see this:
> Explain LLM output that is correct and novel.
As something I need to do for my position to be strong. It would be if I'd made different claims, but I haven't made those claims. I can see parts of the paper's abstract that would also be relevant and tough to deal with if I'd made those other claims, so I'm guessing those are the parts you think I need to focus on. But I'm not disputing stuff like (paraphrasing) "LLMs may produce output that follows the pattern of a form of reasoned argument they were trained on, not just the particulars" from the abstract. Sure, maybe they do, OK.
I don't subscribe to (and don't really understand the motivation for) the claim that generative AI can't produce output that isn't explicitly in its training set, which is a claim I have seen and (I think?) the one you're taking me as promoting. Friggin' Markov chain text generators can, so why couldn't LLMs? Another formulation is that everything they output is a result of what they were trained on, which is stronger, but only because it's, like, tautologically true and not very interesting.
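As a minimal sketch of the Markov chain point (a toy word-level chain over a made-up corpus; nothing here is meant to model how an LLM works internally, just to show that even a trivial generator emits strings it never saw):

```python
import random
from collections import defaultdict

# Toy corpus, made up purely for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
words = corpus.split()

# First-order word-level Markov chain: each word maps to the words observed after it.
chain = defaultdict(list)
for current, following in zip(words, words[1:]):
    chain[current].append(following)

def generate(start="the", length=8, seed=None):
    """Sample a word sequence by repeatedly picking a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

sample = generate()
print(sample)
# Sequences like "the dog sat on the mat" can come out even though that exact
# sentence never appears in the corpus: the chain just recombines local statistics.
print(sample in corpus)  # frequently False: the output isn't a verbatim substring of the training text
```

That kind of recombination is all I mean by output that isn't explicitly in the training set; whether it amounts to reasoning is the separate question.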
And you say you don't have to explain why LLMs output novel and correct answers. Well, I asked for that because the fact that LLMs output correct and novel answers disproves your point. I disproved your point. Think about it. You think I introduced some orthogonal topic, but I didn't. I stated a condition that the LLM meets and that disproves your claim.
So if there exists a prompt-and-answer pair such that the answer is so novel that the probability of it being a correlation or random chance is extraordinarily low, then your point is trivially wrong, right?
Because if the answer wasn't arrived at by some correlative coincidence, which you seem to be claiming, then the only other possible way is reasoning.
Again, such question-and-answer pairs actually exist for LLMs. They can be trivially generated, like the link I shared, which talks about the entire topic of this subthread without training data to support it.
Humans fail at reasoning all the time, yet we don't say humans can't reason. So, to keep the criterion consistent with the one we use for humans: if the LLM reasoned, it can reason, even if it clearly gives the wrong answer sometimes.
Additionally, you likely claim all humans can reason. What's your criterion for that? When you look at a human, sometimes it outputs correct and novel answers that are not part of its training data (experiences).
It's literally the same logic, but you subconsciously move the goalposts to be much, much higher for an LLM. In fact, under this higher criterion, cognitively disabled people and babies can't reason at all.
https://chatgpt.com/share/674dd1fa-4934-8001-bbda-40fe369074...