
> And your answer is that it must be a priori knowledge, and are fine with Lean being one. But you don't accept that LLMs can weakly approximate theorem provers?

I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

>but I would say that this is also the case with humans

Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect, and that wouldn't really work with gen AI, since it's not possible to make logical statements about its justification, only probabilistic ones.

Wittgenstein's quote is funny, as it reminds me a bit of Kant's refutation of Cartesian duality, in which he points out that the "I" in "I think therefore I am" equivocates between subject and object.



> I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

What logically follows from this, given that LLMs demonstrate having internalised a system *like* Lean as part of their training?

That said, even in logic and maths, you have to pick the axioms. Thanks to Gödel’s incompleteness theorems, we're still stuck with the Münchhausen trilemma even in this case.

> Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect of it, and that wouldn't really work with gen AI, as it's not really possible to make logical statements about its justification, only probabilistic ones.

Even with humans, the only meaning I can attach to the word "justified" in this sense is directly equivalent to a probability update — e.g. "You say you saw a sheep. How do you justify that?" "It looked like a sheep." "But it could have been a model." "It was moving, and I heard baaing." "The animatronics in Disney also move and play sounds." "This was in Wales. I have no reason to expect a random field in Wales to contain animatronics, and I do expect them to contain sheep." etc.

The only room for manoeuvre seems to be whether the probability updates are Bayesian or not. This is why I reject the concept of "absolute knowledge" in favour of "the word 'knowledge' is just shorthand for having a very strong belief, and a belief can never be 100%".
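To make the "justification as probability update" idea concrete, here's a minimal sketch of the sheep dialogue as iterated Bayesian updating. All the priors and likelihoods are invented for illustration; the only point is that each piece of evidence shifts the belief, and the posterior approaches but never reaches 1.0.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

# H = "the thing in the field is a real sheep"
belief = 0.5  # agnostic prior

# Each item is (P(E | sheep), P(E | not sheep)) -- invented numbers.
evidence = [
    (0.90, 0.50),  # it looked like a sheep (a model would too)
    (0.80, 0.10),  # it moved and baaed (animatronics in a field rarely do)
    (0.95, 0.01),  # random field in Wales: sheep expected, animatronics not
]

for p_e_h, p_e_not_h in evidence:
    belief = bayes_update(belief, p_e_h, p_e_not_h)

print(belief)  # a very strong belief, but strictly less than 1.0
```

On this framing, "knowing" it was a sheep just means the posterior has been driven very close to 1 — each counter-challenge in the dialogue is a likelihood ratio, not a demand for certainty.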

Descartes' "I think therefore I am" was his attempt at reduction to that which can be verified even if all else you think you know is the result of delusion or illusion. And then we also get A. J. Ayer saying nope, you can't even manage that much, all you can say is "there is a thought now". That's also a problem for physicists (viz. Boltzmann brains), but it's relevant to LLMs too: if, hypothetically, LLMs were to have any kind of conscious experience while running, it would be of exactly that kind — "there is a thought now", not a continuous experience in which it is possible to be bored because input hasn't arrived.

(If only I'd been able to write like this during my philosophy A-level exams, I wouldn't have a grade D in that subject :P)



