
Thanks for the summary.

I suppose the question one could then ask is "will AlphaGo's approach wind up being emulated over time, or is it going to be something like a cul-de-sac?"

How many single algorithmic challenges are worth expending this much effort on? Could AlphaGo's approach be applied to other such problems? Will increasing processor speed just make all this effort moot? Is AlphaGo something like Deep Blue (the custom computer that beat Kasparov and then was dismantled rather than being developed further)?




These are all precisely the right questions to ask about this, I think.

My take is that AlphaGo's approaches are more applicable to other problems than Deep Blue's were, but not by much. Rigid rules make tree search and reinforcement learning easily applicable to Go, but not so much to many real-life problems. I made a small diagram to illustrate this point (http://www.andreykurenkov.com/writing/images/2016-4-15-a-bri...) as part of a series of posts about Game AI (http://www.andreykurenkov.com/writing/a-brief-history-of-gam...).

Still, the general ideas they used are broadly useful: supervised learning followed by reinforcement learning, training multiple models of varying complexity from the same dataset, and combining tree search with learned models. Hybrid methods as a whole will become increasingly common, I think (no doubt self-driving cars are already very complicated hybrid systems).
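To make the "combining tree search with learned models" idea concrete, here is a minimal sketch of the PUCT-style selection rule AlphaGo used to fold a learned policy's prior into tree search. The function and variable names are my own illustration, not DeepMind's actual code; the learned network is stood in for by precomputed priors.

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.0):
    """Blend search statistics with a learned policy prior:
    exploit the observed mean value q, but add an exploration
    bonus proportional to the policy's prior probability that
    shrinks as the move accumulates visits."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

def select_move(stats, c_puct=1.0):
    """Pick the child with the highest PUCT score.
    stats maps move -> (q_value, policy_prior, visit_count)."""
    parent_visits = sum(n for _, _, n in stats.values())
    return max(
        stats,
        key=lambda m: puct_score(stats[m][0], stats[m][1],
                                 parent_visits, stats[m][2], c_puct),
    )

# Early in search, a move the policy network favors ('b') is explored
# even though its observed value is lower so far.
early = {'a': (0.5, 0.2, 10), 'b': (0.4, 0.7, 2)}
# After many visits, the exploration bonus fades and the empirically
# better move ('a') wins regardless of the prior.
late = {'a': (0.9, 0.1, 1000), 'b': (0.1, 0.9, 1000)}
```

The key design point is that the prior only shapes exploration; with enough simulations the value estimates dominate, so a mediocre policy network slows the search down rather than capping its strength.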





