I think you're confusing "a priori" with "prior." A priori is a philosophical term meaning knowledge acquired through deductive reasoning rather than empirical observation.
Thanks for the explanation. It still does not make sense to me, though. A novel solution without deductive reasoning, or a novel solution without empirical observation?
To be honest, I don't think their definition of intelligence is very coherent. I was just being pedantic.
But if I had to guess, I believe they'd argue that an LLM is basically all a priori knowledge: it is trained on a massive data set, and all it can do once trained is reason from those initial axioms (they aren't really axioms, but whatever). Humans, by contrast, and many other animals to a lesser extent, can make observations, challenge existing assumptions, generalize to solve problems, etc.
That's not exactly my definition of intelligence, but that might be what they were going for.
Humans derive their ideas from impressions (sensory experiences) and the ideas they form are essentially recombinations or refinements of those impressions. In this sense, human creativity can be viewed as a process of combining, transforming, and reinterpreting past experiences (impressions).
So, if we look at it from this perspective, human thinking is not fundamentally different from what LLMs do, in that both rely on existing material to create new ideas.
The main difference is that LLMs process text statistically, while humans interpret text in context, influenced by emotions, experiences, biases, and goals. LLMs' interpretation is probabilistic, not conceptual.
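To make the "probabilistic, not conceptual" point concrete, here's a toy sketch in Python. It's a bigram counter, vastly simpler than a real transformer, and the corpus is made up, but the core move is the same: learn a frequency distribution from training text, then sample from it. Nowhere is there a concept of "cat" or "mat", only counts.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "training data".
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed
    # `prev` in the corpus: a probability distribution, not a concept.
    counts = follows[prev]
    if not counts:  # dead end: `prev` never had a successor
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate": recombine the training data, one sampled word at a time.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

A real LLM replaces the counting with a neural network over billions of parameters and a much longer context, but the generation step is still sampling from a learned distribution over the next token.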
Additionally, revolutionary thinking often requires rejecting past ideas and forming new conceptual frameworks, but LLMs cannot reject their prior data; they are bound by it.
At any rate, the question remains: are LLMs capable of revolutionary ideas the way humans are?
But the major difference between the human perceptual apparatus and the data fed to an LLM is that humans are, in a linear temporal fashion, experiencing a physical world that exists outside of our perception. Our observations aren't just large volumes of unstructured data with purely statistical relevance to each other. Instead, we attempt to model the world via objects existing in relative position to each other and events occurring at various points on a timeline. The result is a complex model of cause and effect, actors and things being acted on, etc.
In that way, my dog is far more intelligent than an LLM, in that he has a mental model of his world. An LLM is only intelligent relative to a human actor, and so it is no different from any other technology that humans have created to pursue their own ends.
I am struggling to think of anything that could be considered a solution yet be created without a priori knowledge.