Hacker News

The main problem with self-driving cars is that they can't "read" humans' body language. A human driver can see pedestrians and cyclists (and other cars) and have a rough idea of what they're likely to do in the next few seconds; e.g., a pedestrian leaning out on the crosswalk curb is likely to step into the road soon. Even a reaction time of milliseconds can't make up for the (currently exclusive) human ability to read other humans and prepare accordingly.


They also fail to "write" human body language. Nobody else can predict what the autonomous vehicle will do.

It gets worse when a person is sitting in what appears to be the driver's seat. If the car is stopped and that person is looking toward another passenger or down at a phone, nobody will expect the vehicle to begin moving. Making eye contact with the person in that seat is meaningless, but people will infer meaning.


Great point. So much of our daily driving relies on the exchange of subtle social cues with other drivers.


That is an interesting point. When cycling, I often have to rely on reading the driver's intentions, something I don't really want to have to do. Signals from a person sitting in the driver's seat but not operating the vehicle could be totally irrelevant to predicting the behavior of the vehicle itself (and cars are not equipped to signal those things very well).


I have never driven a car. I walk, bike, skateboard everywhere. If there isn't a light, I require human feedback before walking in front of a car. Normally I wave and they wave back. They know I am there, so I can walk.


Humans also can't read humans' body language. A pedestrian waiting at a corner isn't waiting for the weather to change. They are waiting for passing cars to stop, as required by law. But passing cars speed by instead of stopping, unless the pedestrian does a lunge into the street -- preferably a bluff lunge, since most drivers still won't stop, preferring to race the pedestrian to the middle of the lane.


With sufficient data, I'd expect self-driving cars to be better at predicting what such leans mean. Moreover, for every one human driver who notices such a lean, there may be another human driver who doesn't even notice a pedestrian who has already started walking.


For someone so confident in the application of data, you sure just made up some data to support your point.


True, but commuting daily for the past 30 years sure seems to validate it.


This comment seems unnecessarily hostile. But, still, I'll change my previous comment from "there is" to "there may be" so that it's clear that I'm not claiming any data. Just guessing (reasonably, based on experience).


I walk to work every day, and I can assure you that most human drivers have zero awareness of what pedestrians are about to do.

Even on crosswalks. If I just strolled out onto a crosswalk without looking, trusting drivers who were paying no attention, I'd be long dead.


OT, but I would love to see how self-driving AIs handle something like Vietnam moped traffic and pedestrian crossings. The standard behavior for pedestrians is to walk slowly and keep walking -- stopping, even if it seems necessary to avoid a speeding driver, can be very dangerous, because all of the street traffic is generally expecting you to continue on your path. It's less about reading body language than about the expectation of the status quo:

https://www.youtube.com/watch?v=nKPbl3tRf_U


Are you sure that supervised learning does not create the same classification capabilities in self driving car AIs?
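To make the question concrete, here is a minimal sketch of what "supervised learning creating a classification capability" could look like: a tiny logistic-regression classifier, trained with gradient descent, that maps hypothetical pedestrian features (lean angle toward the street, distance from the curb) to a "will step out" prediction. The features, data, and model are purely illustrative, not from any real self-driving stack.

```python
import math

def sigmoid(z):
    # Clamp to avoid math.exp overflow on extreme inputs.
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.1, epochs=2000):
    """Fit a two-feature logistic-regression model with batch gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    n = len(samples)
    for _ in range(epochs):
        gw0 = gw1 = gb = 0.0
        for (x0, x1), y in zip(samples, labels):
            err = sigmoid(w[0] * x0 + w[1] * x1 + b) - y
            gw0 += err * x0
            gw1 += err * x1
            gb += err
        w[0] -= lr * gw0 / n
        w[1] -= lr * gw1 / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Probability that the pedestrian steps into the road."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Toy labeled data: (lean angle toward street in degrees, distance to curb in m)
# Label 1 = stepped out shortly afterward, 0 = stayed put. All made up.
X = [(15.0, 0.1), (20.0, 0.0), (12.0, 0.2), (18.0, 0.1),
     (2.0, 1.5), (0.0, 2.0), (5.0, 1.0), (1.0, 0.8)]
y = [1, 1, 1, 1, 0, 0, 0, 0]

w, b = train(X, y)
print(predict(w, b, (17.0, 0.1)) > 0.5)  # leaning hard, right at the curb
print(predict(w, b, (1.0, 1.8)) > 0.5)   # upright, well back from the road
```

Whether a pipeline like this generalizes to the long tail of real body language (phones, eye contact, bluff lunges) is exactly the open question in the thread.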


I'm speaking more of the current state of the self-driving cars I've been in - future improvements will likely narrow the gap or eventually surpass human abilities in most scenarios. How or when is the main question.


Only through this HN thread did I learn that AI is actually barely used in current self-driving car software. This was after I wrote that comment.



