
And once we have an AI engine that, given just the instructions on how to drive a car and a list of road rules, can operate one perfectly, I'd agree we're a huge step closer to an AGI (and if it can also learn how to do all the other things most humans can, given just similar inputs, then sure, it would qualify unreservedly).



Sure, and at that point we can shift the goalposts to some other task since driving (like chess) will seem easy in retrospect.

Put another way, what would a system which has taught itself to drive tell us about general intelligence that we didn’t already know? Because as of now it seems like the pattern is:

Computers could never do X

Computers can’t do X

Computers can’t do X very well

Computers can’t do X well in some cases

X wasn’t really a test of AGI because it’s just <algorithm to do X>


Well, let’s think about it from the opposite direction.

Say we built a general system without teaching it anything about driving. We discover that it can drive at a human level. Would we then be surprised if we discover that it cannot solve any other complex tasks at a human level?

I say yes, we would be surprised. I think that driving well requires enough general intelligence that any system which solves it will also be able to, say, pass a high school algebra class or cook a meal in an unfamiliar kitchen. There can be no further goalpost moving at that point.


> Sure, and at that point we can shift the goalposts to some other task

If you like. But I'm happy with where I have them. I'm also pretty confident I'll see that goal reached in my lifetime.



