
You're comparing training a model from scratch in ML to the equivalent of fine-tuning in humans. It's an unfair and incorrect comparison. E.g., no human can drive without first learning to operate their limbs, recognise shapes, etc.

The GP is pointing out that fine motor skills, self-awareness, and the ability to project that self-awareness onto other objects under one's control all took many thousands of years to develop. AI is faster.

However, it's again unfair, as AI only knows what it knows from us, so in that sense any comparison is built on shaky ground.

But for the purposes of comparing a stock human brain as hardware against a current high-end GPU, specifically in terms of ingesting information and then performing tasks, the GPU beats the human brain "hands-down" in every category.

The only categories where it doesn't are simply ones no one has trained it for yet, so the argument stands on a pure hardware-capability basis.



Still, the 300,000-year figure is way off. That's an anatomically modern human, who most likely could be trained to operate a car much like a human of today can. Getting to that point took billions of years, but producing the driver of a car was never the goal of that process.

It's just the wrong way to look at the problem. You're not trying to develop a generic system that can learn how to drive a car; you're trying to develop a specific system that can safely drive a car occupied by humans, naturally employing machine learning.

I would argue that we're 95% of the way there, but solving the last 5% is exponentially more expensive without being commensurately more valuable. There's a "profit ceiling" imposed by the cost of a human driver, which appears to make solving the problem economically intractable.



