
This is a silly interpretation of the data. You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. Which also would've been a silly thing to say. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.



> You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.

No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have resulted in a division by zero when calculating the standard error. That's assuming a frequentist interpretation of probability, of course; the Bayesian treatment is more complicated but arrives at the same end result.

It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.
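
A quick sketch of both points, using an exact Poisson interval for the rate: with zero fatalities the plug-in standard error degenerates to zero (so any ratio against the estimate blows up), while the exact interval still yields a finite upper bound. The mileage and baseline numbers below are placeholders for illustration, not the actual figures from the article:

    from scipy.stats import gamma

    def poisson_rate_ci(events, exposure_miles, alpha=0.05):
        # Exact (Garwood) confidence interval for a Poisson rate,
        # expressed as fatalities per 100 million miles.
        scale = 1e8 / exposure_miles
        lower = 0.0 if events == 0 else gamma.ppf(alpha / 2, events) * scale
        upper = gamma.ppf(1 - alpha / 2, events + 1) * scale
        return lower, upper

    miles = 3e6        # hypothetical autonomous mileage, for illustration only
    baseline = 1.2     # rough US average, fatalities per 100M vehicle miles

    for k in (0, 1):
        rate = k * 1e8 / miles
        se = (k ** 0.5) * 1e8 / miles   # plug-in Poisson SE; exactly 0 when k == 0
        lo, hi = poisson_rate_ci(k, miles)
        print(f"{k} fatalities: estimate {rate:.1f}/100M (SE {se:.1f}), "
              f"95% CI [{lo:.1f}, {hi:.1f}], baseline {baseline}")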


You're talking statistics. I'm talking common sense. Your interpretation of the data is true, but it isn't honest. As a response to scdc, I find it silly.


Nope. Error bars do exist, and with those attached, the interpretation of the data before and after is consistent: before, it was an upper bound; after, it is a range. Every mile driven makes the error smaller.
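
A minimal sketch of that "upper bound becomes a range" point, again with an exact Poisson interval and made-up mileage figures:

    from scipy.stats import gamma

    # Zero fatalities: only an upper bound, which tightens as miles accumulate.
    for miles in (1e6, 1e7, 1e8):
        upper = gamma.ppf(0.95, 1) * 1e8 / miles
        print(f"0 deaths in {miles:.0e} miles -> rate < {upper:.2f}/100M (95%)")

    # One fatality: now a two-sided range, which also narrows with more miles.
    for miles in (1e6, 1e7, 1e8):
        lo = gamma.ppf(0.025, 1) * 1e8 / miles
        hi = gamma.ppf(0.975, 2) * 1e8 / miles
        print(f"1 death in {miles:.0e} miles -> [{lo:.2f}, {hi:.2f}]/100M (95%)")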



