Any intelligence that can synthesize knowledge with or without direct experience.





So ChatGPT? Or maybe that can't "really synthesize"?

How would ChatGPT come up with something truly novel, not related to anything it's ever seen before?

Has a human ever done that?

We obviously can; otherwise, where would our myriad complex concepts, many of which aren't empirical, come from? How could we have modern mathematics unless some thinker had devised the various ways of conceptualizing and manipulating numbers? This is a very old question [1], with a number of good answers as to how a human can do it [2].

1: https://plato.stanford.edu/entries/hume/#CopyPrin

2: https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_dis...


As you link to the Copy Principle: it, or at least that summary of it, appears to be very much what AIs do.

As a priori knowledge is all based on axioms, I do not accept that it is an example of "something truly novel, not related to anything it's ever seen before". Knowledge, yes, but not of the kind you describe. And this would still be the case even if LLMs couldn't approximate logical theorem provers, which they can: https://chatgpt.com/share/685528af-4270-8011-ba75-e601211a02...


You'd have to pick something that fits:

> come up with something truly novel, not related to anything it's ever seen before?

I've never heard of a human coming up with something that's not related to anything they've ever seen before. There is no concept in science that I know of that just popped into existence in somebody's head. Everyone credits those who came before.


Here you say:

> with or without

But in the other reply, you're asking for:

> something truly novel, not related to anything it's ever seen before

So, assuming the former was a typo, you only believe in a priori knowledge, e.g. maths and logic?

https://en.wikipedia.org/wiki/A_priori_and_a_posteriori

I mean, LLMs can and do help with this even though it's not their strength; that's more of a Lean-type problem: https://en.wikipedia.org/wiki/Lean_(proof_assistant)


Yeah, I was specifically asking for synthetic a priori knowledge, which AI by definition can't provide. It can only estimate the joint distribution over tokens, so anything generated from it is by definition a posteriori. It can generate novel statements, but I don't think there's any compelling definition of "knowledge" (including the common justified-true-belief, or JTB, one) that could apply to what it actually is (it's just the highest-probability semiotic result). And in fact, going by the JTB definition of knowledge, AI models making correct novel statements would just be an elaborate example of a Gettier problem.
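
To spell out what I mean by "estimate the joint distribution": an autoregressive LLM is, roughly, modeling something like

    p(x_1, \dots, x_n) = \prod_{t=1}^{n} p(x_t \mid x_{<t})

over token sequences, and whatever it generates is just a high-probability continuation under that model, which is why I say it's a posteriori by construction.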

I think LLMs as a symbolic layer (effectively, as a "sense organ"), combined with some kind of logical reasoning engine of the sort everyone loved decades ago, could accomplish something closer to "intelligence" or "thinking", which I assume is what you were implying with Lean.


My example with Lean is that it's specifically a thing that does a priori knowledge: given "A implies B" and "A", therefore "B". Or all of maths from the chosen axioms.
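
In Lean 4, for example, that is a one-liner (a minimal sketch of just the propositional core):

    -- modus ponens: from a proof of A → B and a proof of A, conclude B
    theorem modus_ponens (A B : Prop) (h : A → B) (a : A) : B := h a

The kernel checks that "h a" really is a proof of "B", and the rest of maths gets built the same way from whatever axioms you choose.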

So, just to be clear, you were asked:

> What does "real intelligence" mean?

And your answer is that it must be a priori knowledge, and you're fine with Lean being one. But you don't accept that LLMs can weakly approximate theorem provers?

FWIW, I agree that the "Justified True Belief" definition of knowledge leads to the conclusions you draw, but I would say that this is also the case with humans: if you apply it consistently, the Gettier problems show that even humans only have belief, not knowledge. When you "see a sheep in a field", you may later be embarrassed to learn that what you saw was a white-coated Puli and there was a real sheep hiding behind a bush, but in the moment the subjective experience of your state of "knowledge" is exactly the same as if you had, in fact, seen a sheep.

Just be careful with what is meant by the word "belief"; there's more than one way I can contradict Wittgenstein's quote on belief:

> If there were a verb meaning "to believe falsely," it would not have any significant first person, present indicative.

Depending on what I mean by "believe", and indeed by "I", given that different parts of my mind can disagree with each other (which is why motion sickness happens).


> And your answer is that it must be a priori knowledge, and you're fine with Lean being one. But you don't accept that LLMs can weakly approximate theorem provers?

I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

>but I would say that this is also the case with humans

Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect of it, and that wouldn't really work with gen AI, as it's not really possible to make logical statements about its justification, only probabilistic ones.

Wittgenstein's quote is funny, as it reminds me a bit of Kant's refutation of Cartesian dualism, in which he points out that the "I" in "I think therefore I am" equivocates between subject and object.


> I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

What logically follows from this, given that LLMs demonstrate having internalised a system *like* Lean as part of their training?

That said, even in logic and maths, you have to pick the axioms. Thanks to Gödel’s incompleteness theorems, we're still stuck with the Münchhausen trilemma even in this case.

> Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect of it, and that wouldn't really work with gen AI, as it's not really possible to make logical statements about its justification, only probabilistic ones.

Even with humans, the only meaning I can attach to the word "justified" in this sense, is directly equivalent to a probability update — e.g. "You say you saw a sheep. How do you justify that?" "It looked like a sheep" "But it could have been a model" "It was moving, and I heard a baaing" "The animatronics in Disney also move and play sounds" "This was in Wales. I have no reason to expect a random field in Wales to contain animatronics, and I do expect them to contain sheep." etc.

The only room for manoeuvre seems to be whether the probability updates are Bayesian or not. This is why I reject the concept of "absolute knowledge" in favour of "the word 'knowledge' is just shorthand for having a very strong belief, and belief can never be 100%".
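
One way to write a single step of that update, using the sheep example, is just Bayes' rule:

    P(\text{sheep} \mid \text{baa}) = \frac{P(\text{baa} \mid \text{sheep})\,P(\text{sheep})}{P(\text{baa} \mid \text{sheep})\,P(\text{sheep}) + P(\text{baa} \mid \lnot\text{sheep})\,P(\lnot\text{sheep})}

with a prior P(sheep) that is already high for a random field in Wales; each answer in that dialogue pushes the posterior closer to 1 without ever reaching it, which is all I mean by "justified".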

Descartes' "I think therefore I am" was his attempt at reduction to that which can be verified even if all else that you think you know is the result of delusion or illusion. And then we also get A. J. Ayer saying nope, you can't even manage that much, all you can say is "there is a thought now", which is also a problem for physicists viz. Boltzmann brains, but also relevant to LLMs: if, hypothetically, LLMs were to have any kind of conscious experiences while running, it would be of exactly that kind — "there is a thought now", not a continuous experience in which it is possible to be bored due to input not arriving.

(If only I'd been able to write like this during my philosophy A-level exams, I wouldn't have a grade D in that subject :P)



