
Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.

Perhaps you can explain your point in a different way?

Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.

You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.

Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.



> Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.

Not in a way that would make the non-computability results for Turing machines apply.

> Perhaps you can explain your point in a different way?

An LLM is not a logic program that finds a perfect solution to a problem; it's a statistical model that predicts the next most likely word. The model's code does not solve, say, an NP-hard problem to find the solution to a puzzle; the only thing it does is pick the next most plausible word using a statistical model built on top of neural networks.
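
A minimal sketch of what "picking the next most plausible word" amounts to (the vocabulary and probabilities here are made up for illustration, not any real model's output):

    import random

    # Hypothetical next-token distribution produced by a neural network
    # for the context "The cat sat on the". The model performs no logical
    # search; it just takes the argmax of, or samples from, this table.
    next_token_probs = {
        "mat": 0.55,
        "floor": 0.20,
        "roof": 0.15,
        "theorem": 0.10,
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Greedy decoding: pick the single most likely token.
    greedy = max(next_token_probs, key=next_token_probs.get)

    # Sampling: draw a token proportionally to its probability.
    sampled = random.choices(tokens, weights=weights, k=1)[0]

    print(greedy, sampled)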

This is why I think Gödel's theorem doesn't apply here: the LLM does not encode a strict, consistent logical or mathematical system, which is what incompleteness would require.

> Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.

I agree with you, though I had a different angle in mind.

> You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.

> Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.

Thank you, that's food for thought.



