
Perhaps companies need to test their safety devices first. I.e., first prove that their LiDAR correctly identifies pedestrians, cyclists, etc. From there, build test vehicles with redundancy, e.g. with multiple LiDAR units. Then prove that the vehicles actually stop in case of emergency. And only then actually hit the road.
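As a sketch of what that redundancy might look like in the emergency-stop path (the function name and the fail-safe policy below are hypothetical, not any vendor's actual design):

    # Each element is one independent LiDAR unit's verdict: True if
    # that unit sees an obstacle in the vehicle's path.
    def should_emergency_brake(lidar_reports: list[bool]) -> bool:
        # Fail-safe policy: a single positive report triggers braking,
        # since a false stop is far cheaper than a missed pedestrian.
        return any(lidar_reports)

    # Example: two of three units detect the pedestrian, one misses her.
    assert should_emergency_brake([True, True, False])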

Of course, the US Department of Transportation should have set up proper certification for all of this. They could easily have done so, since they get to set the certification fees at whatever level they choose.




What you're describing is essentially the driving test that every human needs to pass before being allowed on public roads, paired with a license that can be revoked temporarily or permanently.

I would be very interested in a 3rd party (government or private) creating a rigorous test (obstacles, weather conditions, etc.) for self-driving vehicles. Becoming "XXX Safety Certified with a Y score" for all major updates to the AI could help restore confidence in the system and eliminate bad actors.


How about if we start with a test no human driver is given:

Identify the objects in pictures.

We take our biological vision systems for granted, but it seems one autopilot system couldn't identify a semi crossing in front of the vehicle...
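The first half of such a test is easy to stage today against any off-the-shelf detector. A sketch (the pretrained COCO model is just a stand-in for a real perception stack, and the image path is made up):

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Pretrained COCO detector standing in for the vehicle's perception stack.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # Subset of the COCO label map relevant to road scenes.
    LABELS = {1: "person", 2: "bicycle", 3: "car", 8: "truck"}

    img = to_tensor(Image.open("road_scene.jpg").convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]

    # "Did you see the semi?" becomes a checkable question.
    for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist()):
        if score > 0.5:
            print(LABELS.get(label, "other"), round(score, 2))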


In some countries you have to pass a medical examination which includes a vision test.

The driving schools also have theoretical tests where one has to identify objects in a picture, interpret the situation, and propose the correct action. Of course, these tests are on a higher level: "this is a car, this is a pedestrian" vs. "you're approaching an intersection, that car is turning left, a pedestrian is about to cross the street, another car is coming from there etc."

Not to mention the road and track tests a driver has to pass which include practicing controlling the car in difficult conditions: driving in the dark, evasive actions on slippery surfaces and so on.

Edit: In my opinion it's insane to allow autonomous vehicles on the roads without proper testing by a neutral third party.


>the US Department of Transportation should have set up proper certification for all of this

I think you're severely underestimating the path that something like this would have to take. The certification itself would be under so much scrutiny and oversight that it would take years for that to get done. Unfortunately, the technology is far more readily available and easy to get working than the political capital required to create a certification for this.


if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.


And?


It would cost thousands if not millions of lives. Do you understand that over a million people die every year due to driving? The system is not working.


A "million" people do not die in the U.S. every year from driving. Not even close:

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Not that 37,000+ is a great number, but I don't think many of the detractors here are arguing that Uber et al. have a perfect record. Just that it's possible that progress is being made in a more reckless way than necessary. Just because space flight is inherently difficult, risky, and ambitious doesn't mean we don't investigate the possibly preventable factors behind the Challenger disaster.

edit: You seem to be referencing the worldwide estimate. Fair, but we're not even close to having self-driving cars in the most afflicted countries. Never mind AI; we're not even close to having clean potable water worldwide, and diarrhea-related deaths outnumber road accident deaths according to the WHO: http://www.who.int/mediacentre/factsheets/fs310/en/


Yeah, but the tech will spread there fairly soon after it's established in the US. In places like Africa the most common cars are not some African brand; they seem to mostly be Toyotas, and Toyota will probably implement self-driving when it's proven.


For what value of “soon after” is very expensive automation going to reach Africa, India, and other places in numbers sufficient to put a dent in those fatalities? The slow march of other tech, safety included, suggests decades. Meanwhile the safety gains of automation are so far hypothetical, and until they're well demonstrated, potentially a distant pipe dream. Nothing about ML/AI today suggests a near-future of ultra-safe cars.


Wow, let's just put people in bubble suits so they don't hurt themselves. It's ridiculous to say people shouldn't drive cars because it's possible to hurt themselves or others. We might as well outlaw pregnancy for all the harm that can come to people as a result of being born.


> if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.

That's not a particularly convincing argument, given that (so far), Uber's self-driving cars have a fatality rate of 50 times the baseline, per mile driven[0].

Having to wait an extra ten years to make sure that everything is done properly doesn't sound like the worst price to pay.

[0] Nationwide, we have 1.25 deaths per 100 million miles driven. Uber's only driven about 2 million miles so far: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...
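To make the arithmetic explicit (using only the figures cited above):

    uber_rate = 1 / 2e6           # 1 fatality in ~2 million autonomous miles
    national_rate = 1.25 / 100e6  # 1.25 fatalities per 100 million miles

    print(uber_rate / national_rate)  # 40.0

Strictly 40x with the 1.25 baseline; with a baseline nearer 1 per 100 million miles it comes out just under 50x. Either way, the order of magnitude is the point.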


In those 10 years ~350,000 people will die in car accidents in the US alone.

Let's say that halving the death rate is what we can reasonably expect from the first generation of self-driving cars. Then every year of delay is roughly 15,000 people dead. This woman's death is a personal tragedy for her and those who knew her. However, as a society we should be willing to accept thousands of deaths like hers if it gets us closer to safer self-driving cars.
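Spelling the assumption out (these inputs are the hypotheticals from the paragraph above, not measured outcomes):

    us_road_deaths_per_year = 35_000  # ~350,000 over a 10-year delay
    assumed_reduction = 0.5           # "halving the death rate" is an assumption

    print(us_road_deaths_per_year * assumed_reduction)  # 17500.0, rounded down above to ~15,000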


> Let's say that halving the death rate is what we can reasonably expect from the first generation of self driving cars.

What's your evidence that this is a reasonable expectation? The fatalities compared to the number of miles driven by autonomous vehicles so far show that this is not possible at the moment. What evidence is there that this will radically improve soon?


Why should we accept those deaths? This is like saying we should let doctors try out untested and possibly fatal therapies on unsuspecting patients during routine check-ups if their research might lead to a cure for cancer.


This is a silly interpretation of the data. You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. Which also would've been a silly thing to say. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.


> You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.

No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have required dividing by zero to calculate the standard error. Assuming a frequentist interpretation of probability, of course; the Bayesian form is more complicated but arrives at the same end result.

It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.


You're talking statistics. I'm talking common sense. Your interpretation of the data is true, but it isn't honest. As a response to scdc I find it silly.


Nope. Error bars do exist, and with those attached, the interpretation of the data before/after is consistent. Before it was an upper bound, after it is a range. Every driven mile makes the error on it smaller.
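To put numbers on those error bars: treating fatalities as a Poisson process, an exact 95% interval for one event in roughly two million miles (the figure from the thread above) looks like this:

    from scipy.stats import chi2

    k = 1        # observed fatalities
    miles = 2e6  # autonomous miles driven

    # Exact two-sided 95% interval for a Poisson count, scaled per 100M miles.
    lo = chi2.ppf(0.025, 2 * k) / 2
    hi = chi2.ppf(0.975, 2 * (k + 1)) / 2
    print(lo / miles * 1e8, hi / miles * 1e8)  # roughly 1.3 to 280

An interval spanning two orders of magnitude is "not enough data" expressed in numbers, and every additional mile narrows it.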


Under 50 times. Still horrible, of course.

https://www.androidheadlines.com/2017/12/ubers-autonomous-ve...

Presumably a bit more since December.

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Fluctuates just over 1 per 100 million.

(1 fatality / 2+ million miles) ÷ (1+ fatalities / 100 million miles) ≈ just under 50


You've hit upon one of the most obvious ways to improve the safety of these systems. Deploy the systems more broadly, without giving them active control.

Then, you can start to identify situations where the driver's actions were outside of a predicted acceptable range, and investigate what happened.

Additionally, if you have a large pool of equipped vehicles you can identify every crash (or even more minor events, like hitting potholes or road debris) and see what the self-driving system would have done.
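A minimal sketch of that passive, shadow-mode deployment (every name and threshold here is hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        time: float
        human_brake: float    # pedal input the driver actually applied
        planned_brake: float  # what the self-driving stack would have applied

    THRESHOLD = 0.3  # tunable disagreement threshold; an assumption, not a spec

    def flag_for_review(log: list[Snapshot]) -> list[Snapshot]:
        # Keep the moments where the passive system disagreed sharply
        # with the human, for offline investigation.
        return [s for s in log if abs(s.human_brake - s.planned_brake) > THRESHOLD]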

The realistic problem is that Uber doesn't give a shit. As such, deployment will never be optimized for public safety. It will be optimized for Uber's speed to market.



