
Human reasoning is, amazingly, not sound.

When you add in various patterns, double-checks, and memorized previous results, what human reasoning can do is astounding. But it is very, very far from sound.



> all currently available reasoning approaches are limited.

I guess the topic is how far GPT's reasoning is from human reasoning. We can apply some simple tests:

- Can GPT play chess as well as humans, taking chess as a benchmark reasoning game?

- Has GPT proved any nontrivial math theorems, or solved any math problems for which humans haven't yet found a solution?


One thing I found amusing: when Doug Lenat died there was a burst of articles about Cyc, including this arXiv paper

https://arxiv.org/abs/2308.04445

and that one said that Cyc had over 1,100 special-purpose reasoning engines. The general-purpose resolution solver was nowhere near fast enough to be really useful.

Early on there was

https://en.wikipedia.org/wiki/General_Problem_Solver

which in principle would be capable of finding a winning move in a chess position, but because it worked by exhaustive search it would take far too long in practice. The thing is that a good chess-playing program is not generally intelligent, just as a chess grandmaster isn't necessarily good at anything other than chess; it just has special-purpose heuristics (as opposed to algorithms) that find good chess moves.

ChatGPT-like systems will be greatly improved by coupling them to other systems such as "write a Python/SQL script and then run it", "run a query against Bing and summarize the results", and "go find the chess engine and ask it what move to make". That is, like Cyc, it will get a swiss army knife of tools that help it do things it's not good at, but it doesn't create general intelligence any more than Cyc did.
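
A minimal sketch of that kind of tool dispatch (the ask_llm call and the prompt format here are hypothetical placeholders, not any real product's API):

    import subprocess

    def run_python(code: str) -> str:
        # Run a model-generated script in a subprocess and return what it printed.
        result = subprocess.run(["python", "-c", code],
                                capture_output=True, text=True, timeout=10)
        return result.stdout or result.stderr

    # Registry of special-purpose tools; a search or chess-engine tool would slot in the same way.
    TOOLS = {"python": run_python}

    def answer(question: str, ask_llm) -> str:
        # ask_llm is whatever text-in/text-out model call is available (hypothetical here);
        # assume its first reply names a tool and the tool's input, separated by a newline.
        tool, tool_input = ask_llm("Pick a tool and an input for: " + question).split("\n", 1)
        observation = TOOLS[tool](tool_input)
        return ask_llm("Question: " + question + "\nTool output: " + observation + "\nFinal answer:")

The point is that each tool stays a narrow, special-purpose component; the model only chooses which one to call and interprets the result.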

Roger Penrose, in The Emperor's New Mind, suggests that there must be some quantum magic in the human mind because the human mind is able to solve any math problem whereas any machine is limited by Gödel's theorem. It's silly, however, because we have no evidence that humans are capable of proving every theorem: look at how we struggled with Fermat's Last Theorem for nearly 360 years, or how

https://en.wikipedia.org/wiki/Collatz_conjecture

doesn't even seem tantalizingly within reach.
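
The conjecture itself is trivial to state and to check for any particular starting value; a minimal Python illustration of the iteration it concerns:

    def collatz_steps(n: int) -> int:
        # Repeatedly apply n -> n/2 (n even) or n -> 3n+1 (n odd) until n reaches 1.
        # The conjecture asserts this loop terminates for every positive integer n.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps, despite the tiny starting value

Checking it for any finite set of numbers is easy; proving it for all of them is what nobody has managed.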

The difference might be that humans feel bad when they get the wrong answer, whereas ChatGPT certainly doesn't (however satisfying its empty apologies may be to people). This isn't just an attribute of humans: having worked with other animals such as horses, I'm convinced that they feel bad when they screw up too.


> it will get a swiss army knife of tools that help it do things it's not good at but it doesn't create general intelligence any more than Cyc did

How do you know general intelligence is its own thing and not just a Swiss army knife of tools?

> because the human mind is able to solve any math problem whereas any machine is limited by Gödel's theorem

Any machine can be programmed to solve any problem at all if its proof system is inconsistent, which is probably exactly the case with humans. We work around it because different humans have different inconsistencies, so checking each other's work is how we average those inconsistencies out.
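
That is the principle of explosion: once a contradiction is derivable, every proposition follows. A one-line Lean illustration, with a hypothesis of False standing in for the inconsistency:

    example (P : Prop) (h : False) : P :=
      False.elim h

So an inconsistent prover "solves" everything, just not usefully.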


(As a person who went down the rabbit hole of knowledge-based systems and looked at Cyc quite a bit.)

Three forms of intelligence are (i) animal intelligence, (ii) language use, and (iii) abstract thinking.

Animals are intelligent in their own way, particularly socially intelligent. My wife runs a riding barn, and it is clear to me that one of the things horses are most interested in is what the people and other horses are up to, and that a horse doesn't just have an opinion about other horses: it has an opinion about what the other horses think about a given horse. (Cyc has a system of microtheories and modalized logic that tries to get at this. Of course, visual recognition and similar things are a big part of animal intelligence, and boy have neural nets made progress there.)

Language is a unique capability of humans. (Cyc made no real contribution there.)

If you get a PhD, what you learn is how to develop systems of abstract thinking, or at the very least how to go to conferences and acquire them, or dig through the literature, dust them off, and get them working. There is the aspect of individual creativity, but also the "standing on the shoulders of giants" that Newton talked about.

Before Lenat started on Cyc, he was interested in expert systems for building expert systems, or at the very least a set of development tools for doing the same, and that was a motivation for Cyc, even if the point of Cyc was to produce new knowledge bases and reasoning procedures that would live inside Cyc. The trouble is that this was a tortuous process. I did go through a phase of thinking about evaluating OpenCyc for a project, but it would have taken at least six months just to get started on a project that could be finished much more quickly in some other way.

My own journey led through twists and turns, but I came to see it as something like systems software development, where you build tools like compilers and debuggers that transform inputs into a knowledge base and put it to work. I very much gave up on "embedding in itself".

As for problems in general, I don't really know whether they can all be solved. Isn't it possible that there is no finite procedure that proves the Collatz conjecture?


> Language is a unique capability of humans.

No, it's not. Language is well documented in dolphins, for instance. Crows have also demonstrated self-awareness and an ability to do arithmetic. I think your three-part breakdown of intelligence is out of date. There's no rigorous evidence that intelligence breaks down this way; it's just a "folk theory" at this point.


According to Gödel's incompleteness theorem, some truths aren't provable at all, in the sense that no amount of reasoning within a given formal system can produce a proof of them.



