Once again: if the sensors don't work for the problem domain, the quantity of data gathered is irrelevant.

If you mounted a forward-facing video camera on every car in the world and gathered the data for decades, you'd still be nowhere: you're missing the side and rear views. This is a thought exercise, but it demonstrates the point. If your robot car has a blind spot, all the data in the world won't fix it.

Nobody knows how good these cameras are, but every camera-based system so far has had the same critical limitation: it doesn't work well at night or in poor visibility.



But we know that the sensor suite must be at least as good as a human's: cameras covering more than human vision, plus other sensors. Therefore we know the sensor suite is sufficient to be as good as (and likely better than) a human.


We don't. A badly placed wet leaf, some amount of snow, or vibration may well impact the sensors in ways they wouldn't impact a human driver.


It's not. For example, the eye has much higher dynamic range than any camera sensor available today. Try taking a picture at night that looks half as decent as the scene does to your eye.
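To make the dynamic-range gap concrete, here is a back-of-the-envelope sketch. The figures are assumptions, not measurements: a good camera sensor is often quoted at roughly a 16,000:1 usable scene contrast, while the eye, with local adaptation, is often credited with something on the order of 1,000,000:1. Dynamic range in photographic "stops" is just the base-2 logarithm of the contrast ratio:

```python
import math

def stops(contrast_ratio: float) -> float:
    """Dynamic range in photographic stops (doublings of light)."""
    return math.log2(contrast_ratio)

# Assumed ballpark figures, for illustration only:
camera_stops = stops(16_000)     # ~14 stops for a good sensor
eye_stops = stops(1_000_000)     # ~20 stops for the adapted eye

print(f"camera: {camera_stops:.1f} stops, eye: {eye_stops:.1f} stops")
```

Even under these rough assumptions the gap is about six stops, i.e. the eye copes with scenes roughly 2^6 ≈ 64 times more contrasty than the sensor can capture in one exposure.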



