MIT study concludes that humans do not over-trust Tesla Autopilot (hcai.mit.edu)
11 points by giacaglia on April 4, 2019 | 2 comments



The original title ("Human Side of Tesla Autopilot") is much better and less click-baity.

The paper's [1] findings are pretty similar to what I've experienced after a couple of months of using Autopilot, though. Today's driver-assist systems don't really lower the cognitive load of driving. I spend a little less time keeping the speed correct and staying in my lane, but that means I just spend more time looking for hazards. If anything, the need to keep wiggling the steering wheel makes it harder for me to stop paying attention than manual driving does.

[1] https://hcai.mit.edu/tesla-autopilot-human-side.pdf


The title is misleading. The website itself lists all of these caveats:

We discuss the limitations and implications of this work in detail in the paper; however, it is important to re-state here that these findings:

1) Cannot be directly used to infer safety as a much larger dataset would be required for crash-based statistical analysis of risk,

2) May not be generalizable to a population of drivers nor Autopilot versions outside our dataset,

3) Do not include challenging scenarios that did not lead to Autopilot disengagement,

4) Are based on human-annotation of critical signals,

5) Do not imply that driver attention management systems are not potentially highly beneficial additions to the functional vigilance framework for the purpose of encouraging the driver to remain appropriately attentive to the road.



