
Excuse my presumption, but it seems that you arrive at the logical conclusion "it might be possible to simulate intelligence with a sufficiently big cheat sheet" — and then you disregard it because you're uncomfortable with it. We already know this is the case for specialized environments, so the "only" question left is how far this generalizes.

In my opinion, stranger claims have already been borne out by science (quantum mechanics, for example).

You also have to distinguish between the optimizing process (evolution, or training neural nets) and the intelligent agent itself (human or machine intelligence).




I don’t disregard it, and it isn’t about discomfort. In fact, I think that “solving checkers” is very useful, if your goal is to get the highest-quality answers in checkers.

The problem I have is comparing that to having a dog speak English. It’s totally wrong. You had access to all these computing resources, and the sum total of millions of hours of work by humans. You didn’t bootstrap from nothing like AlphaZero did; you just remixed all possible interesting combinations, then selected the ones you liked. And you try to compare this “top-down” approach to a bottom-up one?

The top-down approach may give BETTER answers and be MORE intelligent. But the way it arrives at them is far less impressive. In fact, it would be rather expected.



