
> The machine learning techniques that were developed and enhanced during the last decade are not magical, like any other machines/software.

You might be using a different definition of "magical" than what others are using in this context.

Of course, when you break down ML techniques, it's all just math running on FETs. So no, it's not extra-dimensional hocus pocus, but absolutely nobody is using that particular definition.

We've seen unexpected superhuman performance from ML, and in many cases it's been inscrutable to observers how that performance was achieved.

Think of move 37 in game 2 of Lee Sedol vs. AlphaGo. The move was shocking to observers in that it appeared to be "bad", but it was ultimately part of a winning strategy for AlphaGo. And all of this happened against the backdrop of sudden superhuman performance in a problem domain that was supposedly "safe from ML".

When people use the term "magic" in this context, think of "Any sufficiently advanced technology is indistinguishable from magic" mixed with the awe of seeing a machine do something unexpected.

And don't forget, the human brain is just a lump of matter that runs on only about 20 W of power to achieve what it does. No magic here either, just physics. Synthetically replicating (and completely surpassing) its functionality is a question of "when", not "if".




Was Go ever "safe from ML", as opposed to "[then] state of the art can't even play Go without a handicap"? It seems like exactly the sort of thing ML should be good at: approximating Nash equilibrium responses in a perfect-information game with a big search space (and humans setting a low bar, as we're nowhere near finding an algorithmic or brute-force solution). Is it really magical that computers running enough simulations expose limitations in human Go theory? Arguably one interesting lesson was that humans were so bad at playing that AlphaGo Zero was better off not having its dataset biased by curated human play.

Yes, it's a clear step forward compared with only being able to beat humans at games which can be fully brute-forced, or a pocket calculator being much faster and more reliable than the average human at arithmetic thanks to a simple, tractable architecture. But it's also one of the least magical-seeming applications, given we already had the calculators and chess engines (especially compared with something like playing Jeopardy), unless you had unjustifiably strong priors about how special human Go theory was.

I think people are completely wrong to pooh-pooh the utility of computers being better at search and calculation in an ever wider range of applied fields. But linking computers surpassing humans at more examples of those problems to certainty that we'll synthetically replicate brain functionality we barely understand is exactly the sort of stretch that makes AGI-sceptics feel the need to point out that this is just a tool iterating through existing programs and sticking lines of code together until the program produces the desired output, not evidence of reasoning in a more human-like way.


I strongly disagree that we've seen anything unexpected so far.

AlphaGo is nothing more than brute force.

And brute force can go a long way; it should not be underestimated.

But so far, this approach has not led to emergent behaviors; the ML black box is not giving back more than what it was fed.


AlphaGo is decidedly not brute force, under any meaningful definition of the term. It's Monte Carlo tree search, augmented by a neural network that gives stronger priors on which branches are worth exploring. There is an explore/exploit trade-off to manage, which takes it out of the realm of brute force. The previous best Go programs used Monte Carlo tree search alone, or with worse heuristics for the priors. AlphaGo improves drastically on the priors, which is arguably exactly the part of the problem one would attribute to understanding the game: of the available moves, which ones look the best?
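To make the distinction concrete, here's a rough sketch (my own simplification in Python, not DeepMind's code) of the PUCT-style selection rule used in AlphaGo-family search: a policy network supplies a prior for each candidate move, and the search balances the value observed so far on a branch against a prior-weighted exploration bonus. The node/child attribute names (N, W, P) and the exploration constant are illustrative assumptions.

    import math

    C_PUCT = 1.5  # exploration constant; in practice a tuned hyperparameter

    def select_move(node):
        # Pick the child maximizing Q + U: Q is the average value seen so far,
        # U is an exploration bonus weighted by the neural network's prior P.
        total_visits = sum(child.N for child in node.children.values())

        def puct_score(child):
            q = child.W / child.N if child.N > 0 else 0.0                   # exploitation
            u = C_PUCT * child.P * math.sqrt(total_visits) / (1 + child.N)  # prior-guided exploration
            return q + u

        move, _ = max(node.children.items(), key=lambda kv: puct_score(kv[1]))
        return move

Moves with a high prior get explored even before they have been visited much, which is exactly the "which moves look the best" part; exhaustive enumeration never enters the picture.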

They used a fantastic amount of compute for their solution, but, as has uniformly been the case for neural networks, the compute required for both training and inference has dropped rapidly since the initial research result.


> AlphaGo is nothing more than brute force.

This statement is completely false under accepted definitions of "brute force" in computer science.
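For reference, here's a minimal sketch of what "brute force" conventionally means in computer science: exhaustive enumeration of the search space, e.g. plain minimax over the full game tree. The game interface below (is_over, score, legal_moves, play) is hypothetical, purely for illustration; with on the order of 10^170 legal Go positions, nothing resembling this is what AlphaGo runs.

    def minimax(game, maximizing):
        # Exhaustively evaluates every line of play to the end of the game:
        # tractable for tic-tac-toe, hopeless for Go.
        if game.is_over():
            return game.score()
        values = (minimax(game.play(m), not maximizing) for m in game.legal_moves())
        return max(values) if maximizing else min(values)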


If recent philosophy taught us anything, it's that brains are special. The hard problem of consciousness shows science is insufficient to rise to the level of entitlement of humans; we're exceptions flying over the physical laws of nature, we have free will, a first-person POV, and other magical stuff like that. Or we have to believe in panpsychism or dualism, like in the Middle Ages. Anything to lift the human status.

Maybe we should start with "humans are the greatest thing ever" and then try to fit our world knowledge to that conclusion. We feel it right in our qualia that we are right, and qualia are ineffable.


> The hard problem of consciousness shows science is insufficient to rise to the level of entitlement of humans, we're exceptions flying over the physical laws of nature, we have free will and other magical stuff like that.

That's not my understanding of the 'hard problem of consciousness'. Admittedly, all I know about the subject is what I've heard from D Chalmers in half-a-dozen podcast interviews.

Can you point to a definitive source?



