I ignored your other response to me because I didn't see anything in the abstract that contradicted my posts, but maybe there's something deeper in the paper that does. I'll read more of it later.
I think, though, the disconnect between us is that I don't see this:
> Explain LLM output that is correct and novel.
As something I need to do for my position to be strong. It would be if I'd made different claims, but I haven't made those claims. I can see parts of the paper's abstract that would also be relevant and tough to deal with if I'd made those other claims, so I'm guessing those are the parts you think I need to focus on. But I'm not disputing stuff like (paraphrasing) "LLMs may produce output that follows the pattern of a form of reasoned argument they were trained on, not just the particulars" from the abstract. Sure, maybe they do, OK.
I don't subscribe to (and don't really understand the motivation for) claims that generative AI can't produce output that's not explicitly in their training set, which is a claim I have seen and (I think?) one you're taking me as promoting. Friggin' Markov chain text generators can, why couldn't LLMs? Another formulation is that everything they output is a result of what they were trained on, which is stronger but only because it's, like, tautologically true and not very interesting.
And you say you don't have to explain why LLMs output novel and correct answers. Well, I asked for that explanation because the fact that LLMs output correct and novel answers disproves your point. Think about it. You think I introduced some orthogonal topic, but I didn't: I stated a condition that the LLM meets which disproves your claim.
So if there exists a prompt-and-answer pair where the answer is so novel that the probability of it arising from correlation or random chance is extraordinarily low, then your point is trivially wrong, right?
Because if the answer wasn't arrived at by some correlative coincidence, which is what you seem to be claiming, then the only other possible way to reach it is reasoning.
Again, such question-and-answer pairs actually exist for LLMs. They can be trivially generated, like the link I shared above, which covers the entire topic of this sub-thread without training data to support it.
Humans fail at reasoning all the time, yet we don't say humans can't reason. So, keeping the criterion consistent with the one we use for humans: if the LLM reasoned, it means it can reason, even if it clearly gives the wrong answer sometimes.
Additionally, you likely claim all humans can reason. What's your criterion for that? When you look at a human, sometimes it outputs correct and novel answers that are not part of its training data (its experiences).
It's literally the same logic, but you subconsciously move the goalposts much, much higher for an LLM. In fact, under this higher criterion, humans with severe intellectual disabilities and babies can't reason at all.