Hacker News

You'll have to make an attempt, because I don't see anything contradictory. If human drivers are your null hypothesis, then you cannot use the fact that "humans are bad" as a blanket acceptance of autonomous vehicles. They are already built into the equation, by virtue of being the null hypothesis. So you can argue against that hypothesis, but that's not what your comment did. Your comment stated some numbers for the null hypothesis, but had no numbers for the hypothesis under test, and therefore didn't really mean much at all.

I'd like to note that I am not the person you replied to, and I personally am not arguing that any program should be shut down based on this one incident. But it's certainly not encouraging that the autonomous vehicle seems to have failed a test at which everyone would have expected it to perform well.

Also, to quote from your previous post:

> Why is it that you see one death from an autonomous car and conclude that autonomous cars aren't ready to be driving, but you see 37,461 deaths from human drivers and don't conclude that humans aren't ready to be driving?

I think we conclude, quite often in fact, that individual humans aren't fit to be driving. Death is one of those scenarios that will quickly lead to such a conclusion.

One huge difference between individual humans and autonomous vehicles is that we can reasonably argue that any Uber vehicle would have performed the same in this scenario. So this is perhaps more akin to saying that this particular driver is not fit for the task, except that this particular driver happens to be driving dozens or more vehicles all at once.



> You'll have to make an attempt, because I don't see anything contradictory. If human drivers are your null hypothesis, then you cannot use the fact that "humans are bad" as a blanket acceptance of autonomous vehicles.

I said, "I admit that there just aren't enough autonomous cars on the road to prove conclusively that autonomous cars are safer than human-operated cars at this point." and I've vocally criticized autonomous cars elsewhere, so I'm not sure where you get the idea that I favor a blanket acceptance of autonomous vehicles.

I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

> One huge difference between individual humans and autonomous vehicles is that we can reasonably argue that any Uber vehicle would have performed the same in this scenario.

I disagree: you can argue that the autonomous vehicles will behave the same given the same inputs, but they will never have exactly the same inputs even if the situation were identical to a human observer, so that's a fairly moot point. If you step back to a larger description of the situation (car crossing a bike lane to get into a turn lane, car doesn't identify and avoid a bicyclist in the bike lane), then you are going to be looking at a percentage of the time where an autonomous car will make a mistake. There's also a percentage of the time where a human driver will make the same mistake. The only way you can compare the safety of autonomous cars to human drivers in this situation is to compare those percentages. And that's ignoring the fact that there are thousands of other situations in driving--even if autonomous cars fail 100% of the time in this situation, there may be enough other situations where they perform enough better than human drivers that they're safer. Simply saying that a car made a mistake in this situation doesn't give us any information at all.
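The percentage comparison described above can be sketched roughly as follows. Every number here is invented purely for illustration, and `overall_failure_rate` is a hypothetical helper, not anything Uber or regulators actually publish:

```python
# Hypothetical sketch: compare drivers by per-situation mistake rates,
# weighted by how often each situation occurs. All figures are invented.

def overall_failure_rate(situation_freq, mistake_rate):
    """Expected mistakes per encounter, averaged over all situations."""
    return sum(situation_freq[s] * mistake_rate[s] for s in situation_freq)

# Invented frequencies of three driving situations:
freq = {"bike_lane_merge": 0.02, "highway_cruise": 0.90, "left_turn": 0.08}

human = {"bike_lane_merge": 0.001, "highway_cruise": 0.0005, "left_turn": 0.002}
# Suppose the autonomous car is worse in the bike-lane case but better elsewhere:
av = {"bike_lane_merge": 0.004, "highway_cruise": 0.0001, "left_turn": 0.001}

# Despite the worse bike-lane rate, the weighted overall rate can be lower:
print(overall_failure_rate(freq, av) < overall_failure_rate(freq, human))  # True
```

The point of the sketch is only that a single situation's failure rate, taken alone, cannot decide the overall comparison.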


> I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

> And that's ignoring the fact that there are thousands of other situations in driving--even if autonomous cars fail 100% of the time in this situation, there may be enough other situations where they perform enough better than human drivers that they're safer. Simply saying that a car made a mistake in this situation doesn't give us any information at all.

But it does give us information. This incident counts, along with all the other incidents and non-incidents, toward the statistics measured in incidents per car-mile. One cannot simply wish away this one incident and make it disappear. It is now forever part of the statistics which will either prove or disprove the hypothesis that autonomous cars are safer.
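That running statistic is simple to state. A sketch follows; the autonomous mileage is a placeholder, and while the 37,461 deaths figure comes from upthread, the roughly 3.2 trillion annual US vehicle-miles denominator is my added assumption:

```python
# Sketch of "incidents per car-mile" as the running statistic.
# AV mileage is a placeholder; human figures pair the 37,461 deaths
# quoted upthread with an assumed ~3.2 trillion US vehicle-miles.

def incidents_per_million_miles(incidents, miles):
    return incidents / miles * 1_000_000

av_rate = incidents_per_million_miles(1, 3_000_000)              # one fatality
human_rate = incidents_per_million_miles(37_461, 3_200_000_000_000)

# One incident over very few miles yields a high -- but highly
# uncertain -- rate estimate compared to the human baseline:
print(av_rate > human_rate)  # True under these placeholder numbers
```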

> I disagree: you can argue that the autonomous vehicles will behave the same given the same inputs, but they will never have exactly the same inputs even if the situation were identical to a human observer, so that's a fairly moot point.

It's not a moot point. Yes, this exact scenario with these exact parameters only occurred once. However, we can still reasonably argue that if we reversed time, replaced that exact autonomous Uber driver with another instance of the autonomous Uber driver, and turned time back on, it would have reacted exactly the same. The same way that I expect the same version of Notepad to open my text file exactly the same on this computer as on another computer. The alternative is that there is some non-deterministic behavior in the driver that is not tied to input... in which case, good luck with that in court.

However, we cannot make that same argument by replacing human drivers. Because each human is, in fact, different.

This is only important in the context that a fanciful revocation of Uber's "autonomous driver's license" would apply to all instances of the autonomous driver, since they would all have been reasonably expected to perform the same.


> But it does give us information. This incident counts, along with all the other incidents and non-incidents, toward the statistics measured in incidents per car-mile. One cannot simply wish away this one incident and make it disappear. It is now forever part of the statistics which will either prove or disprove the hypothesis that autonomous cars are safer.

Can you point out the part of the article or the post that I was responding to which mentions how many car-miles were traveled?

This is exactly what I'm pointing out.

> The same way that I expect the same version of Notepad to open my text file exactly the same on this computer as on another computer.

Notepad doesn't have to read your text file through a lens with slightly different focus, viewing area, and patterns of dust on it each time.

> The alternative is that there is some non-deterministic behavior in the driver that is not tied to input...

The non-deterministic behavior is in the hardware that collects the input, which will never be the same from unit to unit. I don't know whether the software is non-deterministic (it wouldn't surprise me), but I know the hardware is never going to be identical--hardware is always made to tolerances and always has some degree of variability.
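The hardware-tolerance point can be made concrete with a toy example. The detection threshold and noise values below are invented, and real perception stacks are vastly more complex than a single threshold:

```python
# A perfectly deterministic function can still disagree with another
# "identical" unit when hardware tolerances perturb its input.
# Threshold and noise values here are invented for illustration.

def detects_pedestrian(return_strength):
    """Deterministic: the same input always yields the same output."""
    return return_strength >= 0.50

true_signal = 0.501             # a pedestrian right at the detection margin
unit_a = true_signal - 0.002    # this sensor unit reads slightly low
unit_b = true_signal + 0.002    # this sensor unit reads slightly high

print(detects_pedestrian(unit_a))  # False -- missed
print(detects_pedestrian(unit_b))  # True -- detected
```

The software is a pure function both times; only the manufacturing variation in the input path differs.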

Your claim is tantamount to saying that if we put the same person in the same situation but with two different sets of eyes, the eyes would have no effect on the results.

> However, we cannot make that same argument by replacing human drivers. Because each human is, in fact, different.

Autonomous cars are, in fact, different. Just because they're running the same software doesn't mean they're the same; even if the software is completely deterministic, software is only a component of the autonomous driver.


> Can you point out the part of the article or the post that I was responding to which mentions how many car-miles were traveled?

> This is exactly what I'm pointing out.

I think, if your intent is to show that this is incomplete information, that...

1) No one is arguing that.

2) You have not done a great job of attempting to relay that, given phrases like, "Simply saying that a car made a mistake in this situation doesn't give us any information at all."

3) Sometimes that doesn't matter. For instance, Florida law mandates a six-month to one-year license revocation for a DUI, regardless of the circumstances or information.

> Notepad doesn't have to read your text file through a lens with slightly different focus, viewing area, and patterns of dust on it each time.

Difficulty of the task is unrelated to the expected outcome of the task given the same inputs. And we already covered the topic of duplicating the exact situation... I'm not sure what you're trying to gain through this line of argument.

> Your claim is tantamount to saying that if we put the same person in the same situation but with two different sets of eyes, the eyes would have no effect on the results.

I am making no such claim, and I cannot believe that you are so adamant about not understanding my actual claim. This is a hypothetical situation. There is no mention in this scenario about changing the car, including any of the sensor hardware. I am interested in replacing only the driver (or driver software) into the exact same circumstance.

(EDIT: OK, reading back, I did say "any Uber vehicle". While your point stands, I think it's a very uncharitable reading. If hardware sensor tolerances and specks of dust on the camera are going to determine whether a life is lost or not, either those tolerances need to be driven down or this entire idea needs to be rethought. After all, we don't allow those who are legally blind to drive unless they have corrective lenses...)

Assuming the software is deterministic [1], by definition given the same inputs it will result in the same output. Therefore, "replacing" the autonomous driver with another would have resulted in the same incident. You cannot say that with any measure of confidence for any two pairs of human drivers.

[1] Which seems like it would be a good thing to assume, since I don't think one would get much traction by arguing that we should be putting vehicles with non-deterministic behavior on the road...
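The determinism assumption in [1] is just the definition of a pure function; a minimal illustration, where the decision logic is a toy stand-in and not real driving software:

```python
# Two "instances" of the same deterministic decision function must agree
# on identical input. The logic here is a toy stand-in.

def plan_action(sensor_frame):
    # Pure function of its input: no randomness, no hidden state.
    return "brake" if min(sensor_frame) < 5.0 else "continue"

frame = [12.0, 4.8, 30.0]              # the same recorded input, replayed

instance_a = plan_action(frame)
instance_b = plan_action(list(frame))  # a second "driver", same values

print(instance_a, instance_b, instance_a == instance_b)  # brake brake True
```

"Replacing the driver" with another copy of the same function, by construction, changes nothing.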


No human driver will ever receive the same input twice, but we still suspend people's licenses sometimes after a single incident. Are you arguing that we need to let Uber kill a few more pedestrians so we can more accurately determine the safety of their platform vis-à-vis human drivers? Why can't they fabricate some tests that demonstrate their safety in a controlled environment first?


> Are you arguing that we need to let Uber kill a few more pedestrians so we can more accurately determine the safety of their platform vis a vis human drivers?

Are you making accusations in question form so you don't have to back them up? I certainly didn't say that.

> Why can't they fabricate some tests that demonstrate their safety in a controlled environment first?

Are you assuming they haven't done this?


I'll admit that I'm of the GP's mind in wanting to know what you consider to be "evidence". Because you keep reiterating this sentiment:

> I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

It's true that we need to wait for more information about this particular incident -- which is exactly the caveat that the commenter you initially responded to gave [0]. But assuming the facts aren't substantially different from what they seem to be -- an Uber AV hit and killed a jaywalker -- how is that not evidence toward the argument that AVs are less safe? It obviously isn't conclusive evidence. But if Uber goes on to kill a pedestrian for every million miles driven, this first data point would surely be part of the empirical evidence, no?

[0] https://news.ycombinator.com/item?id=16620042
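One way to quantify "part of the empirical evidence": even a single event moves the statistical bounds on the underlying rate. A sketch under an assumed placeholder mileage, computing an exact one-sided Poisson upper bound by bisection:

```python
import math

# How much does one fatality shift the evidence? Compare 95% upper
# bounds on the expected fatality count after 0 vs. 1 observed events.
# The mileage below is a placeholder, not Uber's actual figure.

def poisson_upper(k, conf=0.95):
    """Smallest Poisson mean m with P(X <= k | m) <= 1 - conf."""
    lo, hi = 0.0, 50.0
    for _ in range(100):  # bisection on the (decreasing) Poisson CDF
        mid = (lo + hi) / 2
        cdf = sum(math.exp(-mid) * mid**i / math.factorial(i)
                  for i in range(k + 1))
        lo, hi = (mid, hi) if cdf > 1 - conf else (lo, mid)
    return (lo + hi) / 2

miles = 3_000_000  # assumed AV mileage, for illustration only
print(poisson_upper(0) / miles * 1e6)  # bound per million miles, 0 deaths
print(poisson_upper(1) / miles * 1e6)  # the single death raises the bound
```

With zero events the 95% bound is about 3 expected fatalities over the observed miles (the classic "rule of three"); the single observed death pushes it to roughly 4.7, so the data point does shift the plausible range, even if it settles nothing.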


> Are you making accusations in question form so you don't have to back them up? I certainly didn't say that.

Nope, I'm just having trouble grokking your argument and I thought that might be it. It's true I added a rhetorical edge to the language that was probably unnecessary. I apologize for that--I'm not trying to put words in your mouth. What is your argument?



