The goal is to use the best set of information available to us. I merely cited the normalized numbers because the question has come up various times in this thread - questions along the lines of "how does this rate compare with human drivers?"
The purpose of the extrapolation was to get a (flawed) approximation to that answer. By itself, it doesn't say much, but all we can do is parse the data points available to us:
- Uber's death rate after approximately 3 million self-driven miles is significantly higher than the national average, and probably comparable to drunk drivers.
- Public reporting around Uber's self-driving program suggests a myriad of egregious issues - such as running red lights.
- The company has not obeyed self-driving regulations in the past, in part because they were unwilling to report "disengagements" to the public record.
- The company has a history of an outlier level of negligence and recklessness in other areas - for example, sexual harassment.
But this is precisely why you shouldn't simply extrapolate. Of course people ask, and of course the answer would be useful. But extrapolating a single figure from 3M miles out to the typical measure (per 100M miles) is not useful, because it provides no actionable information.
Providing this likely wrong number anchors a value in people’s minds.
It’s actually worse than saying “we don’t know the rate compared to human drivers because there’s not enough miles driven.”
Your other points are valid, but they don't excuse poor data hygiene.
Even now you are making a claim that is baseless on its face, because you don't know the human fatality rate per 3M miles well enough to say it is "significantly higher." That said, I think there is enough human-driver data available to construct samples comparable to Uber's. But simply dividing the per-100M figure by 33 is not sufficient to support your statement.
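To make the sample-size point concrete, here is a minimal sketch of an exact Poisson confidence interval for the rate, assuming the figures discussed in the thread (1 fatality in roughly 3M self-driven miles) and a national average of about 1.16 fatalities per 100M vehicle miles (that national figure is my assumption, not from this thread; the exact value varies by year):

```python
from scipy import stats

# Assumed illustrative figures: 1 fatality observed in ~3M self-driven miles,
# vs. a national rate of roughly 1.16 fatalities per 100M vehicle miles.
deaths = 1
miles = 3e6
national_rate_per_100m = 1.16

# Exact (Garwood) 95% Poisson confidence interval for the observed count,
# then converted to a rate per 100M miles. Valid here since deaths >= 1.
alpha = 0.05
lower_count = stats.chi2.ppf(alpha / 2, 2 * deaths) / 2
upper_count = stats.chi2.ppf(1 - alpha / 2, 2 * (deaths + 1)) / 2

print(f"Point estimate: {deaths / miles * 1e8:.1f} per 100M miles")
print(f"95% CI: {lower_count / miles * 1e8:.2f} to "
      f"{upper_count / miles * 1e8:.1f} per 100M miles")
print(f"National average (assumed): {national_rate_per_100m} per 100M miles")
```

With a single observed event, the interval runs from below the national average (~0.8 per 100M) to far above it (~190 per 100M), which is why the ~33 per 100M point estimate alone can't establish "significantly higher."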
I haven't seen the public reporting you mention. That seems interesting, and I would appreciate it if you could link to it.
> the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees, who spoke on the condition of anonymity because they signed nondisclosure agreements with the company, and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.