I hand you a gun and say, "You can point this gun at anyone, pull the trigger, and it won't harm them." You point the gun at a crowd, pull the trigger, and it goes off and kills someone.
Who do you blame?
No one is saying guns, or cars, aren't dangerous. However, this kind of false advertising leads people to use cars in dangerous ways while believing they are safe, because they've been lied to.
(Side note: there is some fine print you never read that says the safe-gun technology only works when you point the gun at a single person and doesn't work on crowds.)
It means this will kill people, but fewer people than humans would, and they have actual data that backs this assertion up.
The benchmark is not an alert, cautious, and sober driver, because that's often not how actual people drive. So right now it's sometimes safer to drive yourself, and other times it really is safer to use Autopilot; the net result is that fewer people die.
Autopilot should not be accessible to drivers until their driving habits have been assessed and baselined. If Autopilot is known to drive more safely than the individual human driver in environments X, Y, and Z, then it should be made available in those environments, if not encouraged. That might not be an easy sell, since a major value proposition is inaccessible to many of the people who really want to use it, but it's the most reasonable, safest path.
I also imagine that cars will learn from both human and each others' driving patterns over time, which (under effective guidance) should enable an enormous improvement in a relatively short period of time.
I can grab the data, but the surprising thing about fatal crashes is the typical circumstance: an experienced driver driving during the day, in good weather, on a highway, in a sedan-style car, sober. There are two ways to interpret this data. The first is to assume that crashes are semi-randomly distributed; since this is probably the most typical driving condition, it naturally follows that that's where we'd expect to see the most fatalities.
However, I take the other interpretation. I don't think crashes are randomly distributed. The fact that the scenario is absolutely perfect is exactly the problem, because of humans. All a crash takes is a second of lapsed attention at a really bad time, and in such perfect circumstances we get bored and take road safety for granted, as opposed to driving at night or navigating tricky curves. That's a perfect scenario to end up getting yourself killed in. More importantly here, it's also the perfect scenario for self-driving vehicles, which will drive under such conditions far better than humans simply because it's trivial for them, and they never suffer from boredom or lapses in attention.
Just about every expert agrees, though, that their hardware is NOT capable without LiDAR (which is probably why they churn through people running their Autopilot program), although proving that in court is a whole thing.
Or maybe there is no basis for making such a claim.
Or, well: I just put four cameras on my car and plugged their USB cables into an orange. I declare that this is enough for FSD now. I don't have to prove anything for such an absurd statement?
And how do you know that the present hardware is enough to deliver full self-driving capability? I don't think anyone knows what precise hardware is required at this point, since it hasn't been achieved yet.
So, do you think it may be possible that people do understand the distinction and are STILL not convinced?
I, and the law, would blame the person. They should have no particular reason to believe your claim and absolutely no reason to perform a potentially lethal experiment on people based on your claim.
You may be breaking the law as well, depending on the gun laws in your area, but as I understand it (IANAL) the manslaughter charge falls entirely on the shooter.
I would blame the person, because the best case scenario of shooting an allegedly harmless gun into a crowd is equivalent in effect to the worst case scenario of doing nothing in the first place.
One person died from that accident. There are now at least 4 deaths where Tesla's autopilot was involved (most of the deaths in China don't get much publicity, and I wouldn't be surprised if there are more). And the statistics do not back up your claim that Tesla is safer (despite their attempts to spin it that way).
No, the NHTSA report says nothing about how Tesla's autopilot compares to human driving. Here are two comments I made last week about this:
In that study there are two buckets: total Tesla miles in TACC-enabled cars, and then, after the update, total Tesla miles in cars with TACC + Autosteer, with the comparison calculated on airbag deployments. Human-driven miles are going to dominate both of those buckets, and there's a reason the NHTSA report makes zero claims about Tesla's safety relative to human drivers: it's totally outside the scope of the study. Then add in that some researchers who are skeptical of the methodology have been asking NHTSA/Tesla for the raw data and have yet to receive it.
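To make that concrete, here is a minimal sketch (Python, with purely illustrative numbers that are not from the NHTSA report) of why a before/after comparison of blended fleet miles can't tell you anything about Autopilot versus human drivers: the vast majority of miles in both buckets are driven by humans, and features introduced around the same time confound any difference.

```python
# Hypothetical numbers only; not from the NHTSA report.
# The point: a change in the blended crash rate across the two buckets is
# measured mostly over human-driven miles, so it says nothing about
# "Autopilot vs. human driver" safety.

def crashes_per_million_miles(crashes, miles):
    return crashes / (miles / 1_000_000)

# Bucket 1: TACC-only cars (illustrative figures)
bucket1 = {"miles": 50_000_000, "crashes": 65}
# Bucket 2: TACC + Autosteer cars; assume (purely for illustration) that
# only ~5% of those miles were actually driven with Autosteer engaged.
bucket2 = {"miles": 50_000_000, "crashes": 40, "autosteer_share": 0.05}

rate1 = crashes_per_million_miles(bucket1["crashes"], bucket1["miles"])
rate2 = crashes_per_million_miles(bucket2["crashes"], bucket2["miles"])

print(f"Bucket 1 rate: {rate1:.2f} airbag deployments / M miles")
print(f"Bucket 2 rate: {rate2:.2f} airbag deployments / M miles")
print(f"Apparent reduction: {100 * (1 - rate2 / rate1):.0f}%")

# Even a large apparent reduction is mostly measured over human-driven miles,
# and forward collision warning / AEB arrived around the same time, so the
# drop cannot be attributed to Autosteer specifically.
print(f"Miles actually driven on Autosteer in bucket 2: "
      f"{bucket2['miles'] * bucket2['autosteer_share']:,.0f}")
```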
Autosteer, however, is relatively unique to Tesla. That’s what makes singling out Autosteer as the source of a 40 percent drop so curious. Forward collision warning and automatic emergency braking were introduced just months before the introduction of Autosteer in October 2015. A previous IIHS study shows that both the collision warning and auto emergency braking can deliver a similar reduction in crashes.
I'm not sure what "safer" means. Not sure what posting your insistence in other threads has to do with this. 10x or 100x the number of deaths would be a small price to pay to get everyone into autonomous vehicles. It's air travel all over again, and there's a price to pay.
You can define it however you want. By any common sense definition of safety Tesla has not proven that their autopilot is 'safer' than a human driver.
>10x or 100x the number of deaths would be small price to pay to get everyone in autonomous vehicles.
This supposes a couple of things: mainly that autonomous vehicles will become safer than human drivers, that you know roughly how many humans will have to die to achieve that, and that those humans have to die to achieve it. Those are all unknown at this point, and even if you disagree about the first one (which I expect you might), you still have to grant me two and three.
Ignoring Tesla, self-driving cars will almost certainly be safer. People die in cars constantly. But people don't have to die if companies don't rush the tech to market like Tesla plans to. To be fair, I blame the drivers, but I still think the aggressive marketing pretty much guaranteed someone would be stupid like that, so Tesla should share some of the blame as well.
I don't think we are opposed to autonomous vehicles.
Can Autopilot not run passively and prevent human errors? Why do the two options presented seem to be only "only a human behind the wheel, not assisted by even AEB" and "full Autopilot with no human in the car at all"?
> Do you know how many people die in normal, non-self-driving cars
False equivalence. You need to at least compare per mile, and ideally by driver demographic, since most Tesla drivers are in the high-income and maybe the safety-conscious bracket.
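For a rough illustration of what a per-mile comparison looks like (every number below is a placeholder, not real data, and the helper function is hypothetical):

```python
# Illustrative only: raw death counts are a false equivalence; normalize by
# miles driven before comparing. All figures here are placeholders.

def deaths_per_100m_miles(deaths, miles):
    """Fatalities per 100 million vehicle-miles travelled."""
    return deaths / (miles / 100_000_000)

small_fleet = {"deaths": 4, "miles": 300_000_000}      # hypothetical small fleet
national_fleet = {"deaths": 35_000, "miles": 3.2e12}   # hypothetical country-wide totals

print(f"Small fleet:    {deaths_per_100m_miles(**small_fleet):.2f} deaths / 100M miles")
print(f"National fleet: {deaths_per_100m_miles(**national_fleet):.2f} deaths / 100M miles")

# Even a per-mile rate isn't enough on its own: driver demographics (income,
# age, vehicle age, road types driven) still confound the comparison.
```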
If those people die because the car fails catastrophically, and predictably, rather than the usual reasons it would be news. This is not about 100% or “the perfect is the enemy of the good” or any bullshit Utopianism. This is about a company marketing a flawed product deceptively for money, while letting useful idiots cover their asses with dreams of level 5 automation that aren’t even close to the horizon.
Do you know how many people die in normal, non-self-driving cars? It's never possible to get 100% accuracy.