
One question is: how do you know that you (or humans in general) aren't also just applying statistical language rules while convincing yourself of some underlying narrative involving logical rules? I don't know the answer to this.


We engage in many exercises in deterministic logic. Humans invented entire symbolic systems to describe mathematics without any prior art in a dataset. We apply these exercises in deterministic logic to reality, and reality confirms that our logical exercises are correct to within extremely small tolerances, allowing us to do mind-boggling things like trips to the moon, or engineering billions of transistors organized on a nanometer scale and making them mimic the appearance of human language by executing really cool math really quickly.

None of this could have been achieved from scratch by probabilistic behaviour modelled on a purely statistical analysis of past information. That is immediately evident from the fact that, as mentioned, an LLM cannot do basic arithmetic, or any other deterministic logical exercise whose answer isn't already in the training distribution, while we can.

People will point to humans sometimes making mistakes, but that is because we take mental shortcuts to save energy. If you put a gun to our head and say "if you get this basic arithmetic problem wrong, you will die", we will reason long enough to get it right. People try prompting LLMs that way, and they still can't do it, funnily enough.
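To make that concrete, here is a rough sketch (Python, my own illustration) of the kind of test I mean: generate multiplications that almost certainly don't appear verbatim in any training set, then grade the replies against exact integer arithmetic. ask_model is a hypothetical placeholder for whichever LLM you want to probe.

    import random

    def ask_model(prompt: str) -> str:
        """Hypothetical placeholder: send the prompt to whatever LLM you want to probe."""
        raise NotImplementedError("wire this up to a real model")

    def arithmetic_probe(trials: int = 100, digits: int = 12) -> float:
        """Grade the model on random multiplications against exact integer arithmetic."""
        correct = 0
        for _ in range(trials):
            a = random.randrange(10 ** (digits - 1), 10 ** digits)
            b = random.randrange(10 ** (digits - 1), 10 ** digits)
            reply = ask_model(f"Compute {a} * {b}. Reply with only the number.")
            try:
                correct += int(reply.strip().replace(",", "")) == a * b
            except ValueError:
                pass  # a non-numeric reply counts as wrong
        return correct / trials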


I suspect you are right, but I don't think it's all as obvious as you make out. I still don't see why, in principle, any of that couldn't be achieved by a so-called "agentic" LLM with goals and feedback mechanisms.

Don't get me wrong, I agree that general LLMs are still pretty bad at basic mathematical reasoning. I just tested Claude with a basic question about prime factoring [0], and there seems to have been little improvement on this sort of question over the last three years. But there's a chance they will become good enough that any errors they make are practically imperceptible to humans, at which point we have to ask whether what's happening in our own heads is substantially different.
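For contrast, the deterministic version of that exercise is a few lines of trial division, something like this sketch (plain Python, no model involved), which is right every time by construction:

    def prime_factors(n: int) -> list[int]:
        """Factor a positive integer by trial division; exact, no guessing."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(prime_factors(91))  # [7, 13]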

It's worth noting that at this point you appear to be siding with the "inductive" meaning of intelligence from the article, i.e. intelligence is about (or is demonstrated by) achieving certain behaviours.

Also, there are fairly basic calculations where many humans will err indefinitely unless explicitly shown otherwise, e.g. probabilistic reasoning such as the Monty Hall problem.
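If anyone doubts the 2/3 answer there, a quick Monte Carlo sketch (again my own illustration) makes it hard to argue with:

    import random

    def monty_hall(trials: int = 100_000) -> tuple[float, float]:
        """Estimate win rates for staying with the first pick vs. switching."""
        stay_wins = switch_wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # Host opens a door that is neither the contestant's pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            switched = next(d for d in range(3) if d != pick and d != opened)
            stay_wins += pick == car
            switch_wins += switched == car
        return stay_wins / trials, switch_wins / trials

    print(monty_hall())  # roughly (0.333, 0.667)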

[0] https://claude.ai/share/e22e43c3-7751-405d-ba00-319f1a85c9ad



