We already have plenty of case law and policy for cases where people are killed by mechanical equipment operated by businesses, and the type of penalties and compensation are appropriately different if the cause was negligence, malice, or impossible-to-eliminate fluke events. (Generally in the latter case, compensation is due but there are no criminal charges.) Obviously there will be policy adjustments and clarifications for the case of self-driving cars, but I don't think there's reason to think we can't apply normal and existing legal principles here.



There is a massive difference in terms of scale and choice (FWIW). Industrial automation is most likely to kill you if you work in the plant. The person who died here was a random pedestrian. If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

Moreover, it is not clear to me that most people would consider it just that companies whose industrial robots kill people are not held criminally responsible. Again, I think the difference is simply the scale of exposure; there were never enough interested people for that debate to happen.


Cars are already machines built by companies, and they sometimes malfunction and kill people (both the drivers and the people around them). This is just a new way in which they can malfunction; I don't think it's as dramatically different as you're saying.


You’re right that there are already ways in which non-self-driving cars can malfunction. But previously we held human drivers responsible for certain kinds of accidents. For these same kinds of accidents we now propose holding no one responsible. That seems to be the dramatic change to me.

We have held humans responsible because, assuming a correctly functioning car, they are performing the most complex and risky task and are the ones most able to cause problems. Likewise, self-driving car software performs a complex and risky task in which failure can have serious consequences.


There's already such a thing as a no-fault collision. There's also already such a thing as a collision where the manufacturer is at fault. I feel like this stuff is all covered in driver's ed.


And there is such a thing as an at-fault collision. Is what you are saying supposed to be a contradiction? Also, I have a license and drive regularly; I don’t see how your strange assertion that I must not is productive.


Right now we have something of a two-tier system of liability, which would for the most part work fine with automated vehicles. The primary liability falls on the owner/operator, who usually carries insurance. The owner/operator has some self-interest in maintaining the vehicle - otherwise an automated vehicle might have a perfect design, but the maintainer never changes the brake pads or drives on worn tires, etc. If the insurance company finds reason to doubt the design integrity of some vehicle model, that liability may be passed on to the manufacturer in a separate case. An individual owner is actually in a poor position to know systematically whether there is reason to bring suit over a subtle design or manufacturing defect, but an auto insurance company has both the data and the resources to spot defects and react to them.


> If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

As a pedestrian you already run a significant risk of being killed by a car. To the extent that we hold autonomous car makers responsible for these deaths (and I'm not saying we shouldn't), we should hold non-autonomous car makers responsible for the deaths their vehicles cause as well.


We do hold non-self-driving car makers responsible for bad manufacturing. But in accidents not due to manufacturing we primarily hold the human drivers responsible. I agree with you overall, but the problem is that people seem overeager to hold no one responsible at all, sometimes based solely on a blind faith that self-driving cars will be safer than humans soon, and that the deaths along the way are just the price we will have to pay—as if there is no other option between no self driving cars at all, and the “move fast and break things” attitude that here resulted in a person’s death.


> and the “move fast and break things” attitude that here resulted in a person’s death

Slow your roll. Nobody knows why this person died yet.


The thing to remember is that limiting self-driving cars is not safe either. Human-driven cars kill thousands of people every day; a policy that saved this person's life but set back self-driving car development by even (say) a month might well do more harm than good.
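
To make the tradeoff this comment describes concrete, here is a back-of-envelope sketch in Python. Every number in it is a hypothetical assumption rather than data; in particular, the assumed risk reduction is exactly what the reply below disputes.

    # Back-of-envelope sketch of the tradeoff described above.
    # All numbers are hypothetical assumptions, not measurements.
    human_deaths_per_day = 3500      # rough worldwide road deaths per day (assumption)
    assumed_risk_reduction = 0.5     # assume mature self-driving halves the fatality rate
    delay_days = 30                  # a one-month delay in deployment

    # This also assumes the delayed technology would have replaced all human
    # driving immediately, which overstates the effect considerably.
    extra_deaths = human_deaths_per_day * assumed_risk_reduction * delay_days
    print(f"Hypothetical cost of a one-month delay: {extra_deaths:,.0f} deaths")

Whether this comes out in favor of faster deployment depends entirely on the assumed risk reduction, which is the point the next reply contests.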


lmm, the data does not support your claim; see gpm's comment above.


Airplane (and car, for that matter) malfunctions can already kill travelers. Why not apply existing principles from those types of cases?


Because those vehicles have licensed human operators. Malfunctions may be the manufacturer's fault, but the vehicles themselves are also licensed and regulated; the cars have to pass certain crash test standards, for example.

In this case, the operator was an AI that was negligent and it was unlicensed/unregulated. That's a new scenario. In the human case a person might go to jail for negligent vehicular manslaughter. What does 2 years of jail time look like to an AI? What does a suspended license look like to an unlicensed entity?


I’m specifically talking about the case where the operator is not at fault.


For choice: manufacturer failures happen with normal cars, and you risk that every time you step outside your door. Likewise with building failures, construction accidents, etc.

For scale: the risk of death from a self driving car will probably be less than the current risk of death from normal cars, and will definitely be less than the risks incurred in the 20th century from cars, buildings, etc.

Self-driving cars are definitely a new and large legal development, but there's no reason to think existing legal principles can't handle them.


No, this is not equivalent to the risk of existing manufacturing defects in cars. Car bodies undergo safety tests by the government; the software for these self-driving cars is being tested on public streets. Same with buildings, which must be inspected.

As the GP states, the entire reason Uber is testing in Arizona is that its state government completely got rid of the reporting regulations that were present in CA; the status quo is decidedly not the same as it is for established technologies.

As for scale, look at the other comments where people analyze the risk posed by self driving cars. Your assumption that the risk of death from self-driving cars is less is not backed up by the evidence.

It’s fine to say that self-driving cars might eventually be better drivers than humans, just like robots might eventually be better at conversing than humans.

There is no reason self-driving cars can’t be tested in private. Uber can hire pedestrians to interact with them; I don’t volunteer to be their test subject by deciding to take a walk.


First you started by claiming the difference was due to scale and choice. You're now retreating to a third distinction: the difference between established technology and experimental technology. Well, all established technology was experimental technology at one point, and it was not uniformly regulated. We could play this game all day.

Self-driving cars are a new and important industrial development that will require adjustments to policy. They don't require revolutionary new legal principles.



