
Tangential to your point, do you think it's fair to compare AI-driven cars to "average" drivers? I think it's more reasonable to compare them to "average [worst_category_of_driver]." For example, how does the Tesla car compare to teen drivers? If it's better than a meaningfully sized category of human drivers, it's probably ready for the road. Being on the real road is where the best data will be collected.


This seems backwards to me. Shouldn’t we be comparing the performance of self-driving cars to the performance of competent human drivers? The variance in human driving ability is quite high, which suggests (to me) that self-driving cars will become better than a significant fraction of human drivers well before they become (what I would consider) safe. I personally don’t want to see any more below-average drivers on the road. It’s true that there’s a constant influx of inexperienced human drivers, many of whom (in my opinion) shouldn’t have been granted licenses, but that’s a separate problem. (I’m talking about the US. I’ve read that licensing requirements are more stringent elsewhere, but here in the States we give out driver’s licenses like candy.)


I think you're confusing the end goal with the path towards that goal. Ultimately, yes, we will want to compare AI-driven cars against the best human drivers. But right now we're trying to determine if AI-driven cars should even be allowed on the road.

The underlying assumption I'm making is that AI driving improvement is accelerated by being on the real road. If that's true, then we want the cars on the road as soon as reasonably possible, because I would gladly trade hundreds or even thousands of AI-driver-caused deaths in the short run if I am reasonably convinced that it will prevent the tens of thousands of human-driver-caused deaths every year. And I am convinced of that. I also acknowledge I'm in the pool of people who might be killed by the AI driver. Just as we let teen drivers on the road with the expectation they will improve over several years, so should we accept a similar risk from AI drivers. The return is far better for the "inexperienced" AI driver because the AIs will continually improve forever, but as you noted we get a new batch of bad human drivers every day.
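
To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. The ~40,000/year US road-death figure is approximate, and every AI-related number below is a pure assumption for the sake of the argument, not data:

    # Back-of-the-envelope only: all numbers are illustrative assumptions.
    human_deaths_per_year = 40_000          # approximate annual US road deaths
    fraction_preventable_by_ai = 0.5        # assumed share AI drivers eventually eliminate
    years_accelerated = 2                   # assumed speed-up from training on real roads
    extra_ai_deaths_during_rollout = 2_000  # assumed one-time cost of early deployment

    lives_saved = human_deaths_per_year * fraction_preventable_by_ai * years_accelerated
    net_benefit = lives_saved - extra_ai_deaths_during_rollout
    print(net_benefit)  # 38000 net lives saved under these (made-up) assumptions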


That’s a fair point. From a utilitarian perspective, I think you’re probably right. Unfortunately, the public will not follow this dispassionate line of reasoning if AI drivers start killing people in “large” numbers, even if there’s strong evidence to support its soundness. It might therefore be best to be just a little less aggressive than would otherwise be optimal, to avoid a public backlash that could—despite being irrational—delay the arrival of competent AI drivers.


Only if you are exclusively giving the Teslas to that smaller, worse category of drivers.


The point is that we allow teen drivers on the road with the expectation and understanding that they will improve over several years. Why would we not expect to have to make the same concession for AI?


Because these are companies selling a product to consumers who expect it to keep them safe on the road. We need to hold them to a much higher standard than we hold a teenager.


I feel like this position is too risk-averse. Fear of a few hundred AI-driver deaths will result in hundreds of thousands more human-driver deaths.


If a human drives sufficiently poorly they will eventually lose their right to drive on public roads for a time.

Could/should this also apply to autonomous vehicles? If so, how?


Sure. I'm open to suggestions, but I'll offer financial liability for damages as an opening bid.


I suspect that the corporate entities involved have such deep pockets and/or so many lawyers and lobbyists that financial liability won't work.

Instead, how about:

All new autonomous vehicle configurations (let's call that the algos + sensors + vehicle) have to take some kind of actual driving test, just like us humans do.

Maybe the public could even help design a good test? "Not driving at speed into a stationary fire truck which is parked on the highway right in front of you" would be one element I'd want to see tested.

If an autonomous vehicle is involved in an accident and the algo/sensors/vehicle are found to be (partially) at fault, the configuration earns penalty points.

If that configuration earns enough penalty points over a period of time, the entire configuration loses its certification, plus a fine, plus a mandatory re-test.

This method appears to work reasonably well in dealing with us not-always-perfect human drivers, and ought to concentrate the minds of the designers/developers/managers behind autonomous vehicles.
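
For what it's worth, here's a minimal Python sketch of what that penalty-point bookkeeping could look like. The class, point values, and threshold are all invented for illustration, not any real regulator's scheme:

    # Minimal sketch of the penalty-point scheme described above.
    # The class, point values, and threshold are invented for illustration.
    from dataclasses import dataclass, field

    DECERTIFICATION_THRESHOLD = 12  # arbitrary example value

    @dataclass
    class AVConfiguration:
        """One certified combination of algorithms + sensors + vehicle."""
        name: str
        certified: bool = True
        penalty_points: int = 0
        incidents: list = field(default_factory=list)

        def record_incident(self, description: str, points: int) -> None:
            """Record an at-fault (or partially at-fault) incident and its penalty points."""
            self.incidents.append((description, points))
            self.penalty_points += points
            if self.penalty_points >= DECERTIFICATION_THRESHOLD:
                self.certified = False  # fine and mandatory re-test would follow

    config = AVConfiguration(name="vendor-x-stack-4.2")
    config.record_incident("failed to yield at crosswalk", points=4)
    config.record_incident("drove at speed into stationary fire truck", points=9)
    print(config.certified)  # False: configuration loses certification, must re-test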



