
“The first generations [of autonomous cars] are going to require a driver to intervene at certain points,” Clifford Nass, codirector of Stanford University’s Center for Automotive Research, told me. “It turns out that may be the most dangerous moment for autonomous vehicles. We may have this terrible irony that when the car is driving autonomously it is much safer, but because of the inability of humans to get back in the loop it may ultimately be less safe.”

This reminds me of the frame problem in earlier AI[1]. Artificial systems can be built to deal well with a given frame under a given specification, but when they come to the boundary of that frame, they fail to gracefully change their approach.

[1] http://web.media.mit.edu/~minsky/papers/Frames/frames.html

Edit: changed the link since the Minsky article is more descriptive. Wikipedia only describes a frame as a data structure, but I (and, I think, Minsky) would see frames as a metaphor for the structure and limitations of AI systems.
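To make the frame idea concrete, here's a toy sketch (my own illustration, nothing from Minsky's paper or the article): a rule-based controller that behaves sensibly inside the frames it was specified for, and at the boundary can only punt to a human, which is exactly the dangerous hand-off moment Nass describes.

```python
# Toy illustration of the frame problem: within a known frame, behaviour
# is well defined; outside every frame the system cannot adapt its
# approach, it can only hand control back to a (possibly disengaged) human.
# The frame names and actions here are made up for the example.

RULES = {
    "highway_cruise": "hold lane, match traffic speed",
    "stop_and_go": "keep short gap, brake smoothly",
}

def act(situation):
    # Inside a specified frame: graceful, competent behaviour.
    if situation in RULES:
        return RULES[situation]
    # At the frame boundary: no graceful degradation, just a hard hand-off.
    return "REQUEST HUMAN INTERVENTION"

print(act("highway_cruise"))
print(act("deer_in_fog"))
```

The point isn't the lookup table itself but the cliff edge: there is no smooth path from "competent within frame" to "competent outside it", only an abrupt fallback.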



Yes, it is ironic, but whilst human-operated vehicles are still using the roads, it actually requires much more sophisticated autonomous vehicles to 'join in'. Automated vehicles will greatly benefit us by being able to communicate their intentions to each other, allowing roads to flow more freely: continuous adjustments to speed and direction would make stopping for traffic virtually obsolete. Now throw an essentially unpredictable human-driven vehicle into the mix and you get problems, and the onus will be on the automated vehicles to deal with them.



