
Deep Blue:

Massive search +

Hand-coded search heuristics +

Hand-coded board position evaluation heuristics [1]

AlphaGo:

Search via simulations (Monte Carlo Tree Search) +

Learned search heuristics (policy networks) +

Learned patterns (value networks) [2]
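The AlphaGo recipe above can be sketched in miniature. Below is a minimal, self-contained Monte Carlo Tree Search (selection via UCB, expansion, random-rollout simulation, backpropagation) on a made-up toy game invented for illustration; real AlphaGo replaces the random rollout with value networks and biases selection with policy networks:

```python
import math, random

random.seed(0)  # deterministic run for reproducibility

# Toy game (an assumption for illustration, not Go): pick bits one at a
# time; after DEPTH picks, the reward is the fraction of 1-bits chosen.
# MCTS should learn to prefer action 1.
DEPTH = 4

def is_terminal(state):
    return len(state) == DEPTH

def reward(state):
    return sum(state) / DEPTH

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # running mean of rollout rewards

def ucb(parent, child, c=1.4):
    # Upper Confidence Bound: exploit (mean value) + explore (visit bonus)
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(state):
    # Random playout to a terminal state (the "simulation" phase)
    while not is_terminal(state):
        state = state + (random.choice((0, 1)),)
    return reward(state)

def mcts(root, iterations=2000):
    for _ in range(iterations):
        node, path = root, [root]
        # 1. Selection: descend through fully expanded nodes via UCB
        while not is_terminal(node.state) and len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
            path.append(node)
        # 2. Expansion: add one unexplored child
        if not is_terminal(node.state):
            action = 0 if 0 not in node.children else 1
            node.children[action] = Node(node.state + (action,))
            node = node.children[action]
            path.append(node)
        # 3. Simulation
        r = rollout(node.state)
        # 4. Backpropagation: update running means along the path
        for n in path:
            n.visits += 1
            n.value += (r - n.value) / n.visits
    # Best first move = most-visited root child
    return max(root.children, key=lambda a: root.children[a].visits)

best = mcts(Node(()))
print("best first action:", best)
```

After 2000 iterations the search concentrates its visits on action 1, the reward-maximizing move; the "learned heuristics" in AlphaGo play the role that raw visit statistics play here, steering the tree toward promising branches far earlier.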

Humans' strengths seem to be our ability to learn search heuristics and complex patterns. We can run some simulations, but not nearly as extensively as machines can.

The reason Kasparov could hold his own against Deep Blue's 200,000,000-positions-per-second search during their first match was probably his far superior search heuristics, which let him drastically focus on better paths, and his better evaluation of complex positions. The patterns in chess, however, may not be complex enough for a better evaluation function to yield much benefit. More importantly, chess's branching factor after applying heuristics is low enough that massive search yields a substantial advantage.

In Go, patterns are much more complex than in chess, with many simultaneous battlegrounds that can potentially become connected. Go's branching factor is also several times higher than chess's, rendering massive search without good guidance powerless. Both factors raise the value of learned patterns. Google stated that its learned policy networks are so strong "that raw neural networks (immediately, without any tree search at all) can defeat state-of-the-art Go programs that build enormous search trees". This is analogous to Kasparov using learned patterns to hold his own against Deep Blue's massive search in their first match, and it is a key reason Go professionals can still beat other Go programs.
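The scale gap is worth making concrete. A back-of-the-envelope comparison of total game-tree sizes (b^d), using commonly cited average figures rather than anything from the articles above (roughly 35 legal moves and 80 plies per game for chess, roughly 250 and 150 for Go):

```python
import math

# Commonly cited rough averages (assumptions for illustration)
chess_branching, chess_depth = 35, 80   # moves per position, plies per game
go_branching, go_depth = 250, 150

# Work in log10 since b^d overflows ordinary floats
chess_log_tree = chess_depth * math.log10(chess_branching)
go_log_tree = go_depth * math.log10(go_branching)

print(f"chess game tree ~10^{chess_log_tree:.0f}, go game tree ~10^{go_log_tree:.0f}")
```

With these inputs chess comes out around 10^124 positions and Go around 10^360, which is why brute-force search alone cannot close the gap in Go.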

AlphaGo demonstrates that combining algorithms that mimic human abilities with powerful machines can surpass expert humans in very complex tasks.

The big questions we should strive to answer before it is too late are:

1) What trump cards do humans still hold against computer algorithms and massively parallel machines?

2) What should we do when a few more breakthroughs enable machines to surpass us in all relevant tasks?

Note: It is not entirely clear from the IBM article that the search heuristics are hand-coded, but that seems likely given the prevalent AI techniques of the time.

[1] https://www.research.ibm.com/deepblue/meet/html/d.3.2.html

[2] http://googleresearch.blogspot.com/2016/01/alphago-mastering...




Strong AI is not necessarily a bad thing. Instead of worrying about questions 1 and 2, we could be thinking less about constraining and competing with AI and more about cooperation and goal-orientation: e.g. the work of Yudkowsky (https://intelligence.org/files/CFAI.pdf) or some of the thoughts offered by Nick Bostrom (http://nickbostrom.com/). Goal-orientation is preferable to capability constraint because the potential benefits are far larger.

Tl;dr: I, for one, welcome our robot overlords (so long as they don't behave like our robot overlords).


I agree that superintelligence could bring enormous benefits to humanity, but the risks are very high as well. They are in fact existential risks, as detailed in Bostrom's book Superintelligence.

That is why we need to invest much more research effort in Friendly AI and trustworthy intelligent systems. People should consider contributing to MIRI (https://intelligence.org/), where Yudkowsky, who helped pioneer this line of research, works as a senior fellow.



