> if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.
That's not a particularly convincing argument, given that, so far, Uber's self-driving cars have a fatality rate of roughly 40 times the baseline, per mile driven[0].
Having to wait an extra ten years to make sure that everything is done properly doesn't sound like the worst price to pay.

[0] Nationwide, we have 1.25 deaths per 100 million miles driven. Uber's only driven about 2 million miles so far: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...
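To spell out the arithmetic behind that multiplier, here is a quick sketch using only the footnote's figures (the single-fatality count is the incident discussed in this thread):

```python
# Rate ratio implied by the figures in footnote [0].
baseline = 1.25 / 100_000_000  # nationwide fatalities per mile driven
uber = 1 / 2_000_000           # one fatality over ~2 million autonomous miles
print(uber / baseline)         # 40.0 -> roughly 40x the baseline rate
```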
In those 10 years ~350,000 people will die in car accidents in the US alone.
Let's say that halving the death rate is what we can reasonably expect from the first generation of self-driving cars. Every year we delay that is roughly 17,500 people dead. This woman dying is a personal tragedy for her and those who knew her. However, as a society we should be willing to accept thousands of deaths like hers if it gets us closer to safer self-driving cars.
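The arithmetic behind that figure, spelled out as a minimal sketch (the 50% reduction is the commenter's assumption, not an established number):

```python
# Lives at stake per year of delay, under the comment's assumptions.
annual_deaths = 350_000 / 10  # ~35,000 US road deaths per year (from above)
assumed_reduction = 0.5       # assumption: first-gen self-driving halves the rate
print(annual_deaths * assumed_reduction)  # 17,500 deaths per year of delay
```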
> Let's say that halving the death rate is what we can reasonably expect from the first generation of self-driving cars.
What's your evidence that this is a reasonable expectation? The number of fatalities relative to the miles driven by autonomous vehicles so far shows that it is not possible at the moment. What evidence is there that this will improve radically soon?
Why should we accept those deaths? This is like saying we should let doctors spring untested and possibly fatal therapies on patients during routine check-ups on the grounds that their research might lead to a cure for cancer.
This is a silly interpretation of the data. You can tell because, up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. That would also have been a silly thing to say. If a single data point takes you from infinitely better to 40x worse, the correct interpretation is: not enough data.
> You can tell because, up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. If a single data point takes you from infinitely better to 40x worse, the correct interpretation is: not enough data.
No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have required dividing by zero when calculating the standard error. Assuming a frequentist interpretation of probability, of course; the Bayesian form is more complicated but arrives at the same end result.
It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.
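For illustration only, here is a sketch of what the Bayesian version could look like, using a conjugate Gamma prior on the per-mile rate (the weak prior chosen here is my assumption, not something from the thread):

```python
from scipy.stats import gamma

# Gamma(a0, b0) prior on the fatality rate per mile; Gamma is conjugate
# to the Poisson likelihood, so after observing k events over n miles
# the posterior is Gamma(a0 + k, b0 + n).
a0, b0 = 0.5, 0.0        # Jeffreys-style weak prior (assumed for illustration)
k, n = 1, 2_000_000      # one fatality in ~2 million miles (footnote [0])

posterior = gamma(a=a0 + k, scale=1.0 / (b0 + n))
print(posterior.mean())          # ~7.5e-7 per mile, ~60x the 1.25e-8 baseline
print(posterior.interval(0.95))  # but the credible interval is enormous
```

The posterior mean does land far above the baseline, but with a single event the credible interval is huge, which is where the sample-size caveat bites.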
You're talking statistics; I'm talking common sense. Your interpretation of the data is true, but it isn't honest. As a response to scdc, I find it silly.
Nope. Error bars do exist, and with those attached the interpretation of the data before and after is consistent: before, it was an upper bound; after, it's a range. Every mile driven makes the error bars smaller.
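Concretely, here is a sketch of those error bars as exact Poisson confidence intervals for a rate (my illustration; the ~2-million-mile exposure comes from footnote [0] above):

```python
from scipy.stats import chi2

def poisson_rate_ci(k: int, exposure: float, conf: float = 0.95):
    """Exact (Garwood) confidence interval for a Poisson rate:
    k events observed over the given exposure (here, miles driven)."""
    alpha = 1 - conf
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / (2 * exposure)
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / (2 * exposure)
    return lower, upper

MILES = 2_000_000  # Uber's approximate autonomous miles (footnote [0])

# Before the crash: 0 fatalities -> only an upper bound is informative.
print(poisson_rate_ci(0, MILES))  # (0.0, ~1.8e-6 per mile)

# After the crash: 1 fatality -> a (very wide) two-sided range.
print(poisson_rate_ci(1, MILES))  # (~1.3e-8, ~2.8e-6 per mile)
```

With one fatality, the 95% interval runs from roughly the nationwide baseline (1.25e-8 per mile) up to about 200x it, which matches the "upper bound before, range after" reading.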