
What will the consequences of this person’s killing be? Will someone lose out on a promotion, or miss their performance bonus?

We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.



We already have plenty of case law and policy for cases where people are killed by mechanical equipment operated by businesses, and the types of penalties and compensation appropriately differ depending on whether the cause was negligence, malice, or an impossible-to-eliminate fluke. (Generally, in the latter case, compensation is due but there are no criminal charges.) Obviously there will be policy adjustments and clarifications for the case of self-driving cars, but I don't think there's reason to believe we can't apply normal, existing legal principles here.


There is a massive difference in terms of scale and choice (FWIW). Industrial automation is most likely to kill you if you work in the plant. The person who died here was a random pedestrian. If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

Moreover, it is not clear to me that most people would consider it just that we don’t hold the companies whose industrial robots kill people criminally responsible. Again, I think it’s just that there is a massive difference in the scale of exposure; there were not enough interested people to have a debate.


Cars are already machines built by companies, and they sometimes malfunction and kill people (both the drivers of the vehicles and people around them). This is just a new way in which they can malfunction; I don't think it's as dramatically different as you're saying.


You’re right that there are already ways in which non-self-driving cars can malfunction. But previously we held human drivers responsible for certain kinds of accidents. For these same kinds of accidents we now propose holding no one responsible. That seems to be the dramatic change to me.

We have held humans responsible because, assuming a correctly functioning car, they are performing the most complex and risky task and are the most able to cause problems. Likewise, self-driving car software performs a complex and risky task in which failure can have serious consequences.


There's already such a thing as a no-fault collision. There's also already such a thing as a collision where the manufacturer is at fault. I feel like this stuff is all covered in driver's ed.


And there is such a thing as an at-fault collision. Is what you are saying supposed to be a contradiction? Also, I have a license and drive regularly; I don’t see how your strange insinuation that I must not is productive.


Right now we have something of a two-tier level of liability, which would for the most part work fine with automated vehicles. The primary liability falls on the owner/operator, who usually carries insurance. The owner/operator has some level of self-interest in maintenance of the vehicle - otherwise an automated vehicle might have a perfect design, but the maintainer never changes the brake pads or operates with the tires worn, etc. If the insurance company finds reason to doubt some model's design integrity, then that liability may be passed on to the manufacturer in a separate case. An individual owner is actually in a poor position to know systematically whether there is reason to bring suit over a subtle design or manufacturing defect, but an auto insurance company has both the data and the resources to see and react to defects.


> If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

As a pedestrian you already run a significant risk of being killed by a car. To the extent that we hold autonomous car makers responsible for these deaths (and I'm not saying we shouldn't), we should hold non-autonomous car makers responsible for the deaths their vehicles cause as well.


We do hold non-self-driving car makers responsible for bad manufacturing. But in accidents not due to manufacturing we primarily hold the human drivers responsible. I agree with you overall, but the problem is that people seem overeager to hold no one responsible at all, sometimes based solely on a blind faith that self-driving cars will be safer than humans soon, and that the deaths along the way are just the price we will have to pay—as if there is no other option between no self-driving cars at all, and the “move fast and break things” attitude that here resulted in a person’s death.


> and the “move fast and break things” attitude that here resulted in a person’s death

Slow your roll. Nobody knows why this person died yet.


The thing to remember is that limiting self-driving cars is not safe either. Human-driven cars kill thousands of people every day; a policy that saved this person's life but set back self-driving car development by even (say) a month might well do more harm than good.
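Here is a rough back-of-the-envelope version of that expected-value argument in Python. Every number is an assumption chosen for illustration: the worldwide road-death figure is approximate, and the fraction of deaths a mature technology would avert is pure conjecture.

    # Back-of-the-envelope sketch of the argument above.
    # All numbers are assumptions for illustration, not estimates.
    human_deaths_per_day = 3700  # approximate worldwide road deaths (assumed)
    averted_when_mature = 0.5    # conjecture: mature self-driving halves that
    delay_days = 30              # the hypothetical one-month setback

    # Deaths the mature technology would have averted during the delay.
    cost_of_delay = human_deaths_per_day * averted_when_mature * delay_days
    print(f"deaths attributable to the delay: {cost_of_delay:,.0f}")

The conclusion is only as strong as those assumptions: if the technology never reaches that safety level, or the policy doesn't actually slow its maturity, the calculation collapses, which is roughly the objection raised in the reply below.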


lmm, the data does not support your claim; see gpm's comment above.


Airplane (and car, for that matter) malfunctions can already kill travelers. Why not apply existing principles from those types of cases?


Because those vehicles have licensed human operators. A malfunction may be the manufacturer's fault, but manufacturers are also licensed and regulated. The cars have to pass certain crash-test standards, for example.

In this case, the operator was an AI that was negligent, and it was unlicensed and unregulated. That's a new scenario. In the human case, a person might go to jail for negligent vehicular manslaughter. What does two years of jail time look like to an AI? What does a suspended license look like to an unlicensed entity?


I’m specifically talking about the case where the operator is not at fault.


For choice: manufacturer failures happen with normal cars, and you risk that every time you step outside your door. Likewise with building failures, construction accidents, etc.

For scale: the risk of death from a self-driving car will probably be less than the current risk of death from normal cars, and will definitely be less than the risks incurred in the 20th century from cars, buildings, etc.

Self-driving cars are definitely a new and large legal development, but there's no reason to think existing legal principles can't handle them.


No, this is not equivalent to the risk of existing manufacturing defects in cars. Car bodies undergo safety tests by the government; the software for these self-driving cars is being tested on public streets. Same with buildings, which must be inspected.

As the GP states, the entire reason Uber is testing in Arizona is that the state government completely did away with the reporting regulations present in CA; the status quo is decidedly not the same as it is for established technologies.

As for scale, look at the other comments where people analyze the risk posed by self-driving cars. Your assumption that the risk of death from self-driving cars is less is not backed up by the evidence.

It’s fine to say that self-driving cars might eventually be better drivers than humans, just like robots might eventually be better at conversing than humans.

There is no reason self-driving cars can’t be tested in private. Uber can hire pedestrians to interact with them—I don’t volunteer to be their test subject by deciding to take a walk.


You started by claiming the difference was due to scale and choice. You're now retreating to a third distinction: the difference between established technology and experimental technology. Well, all established technology was experimental technology at one point, and it was not uniformly regulated. We could play this game all day.

Self-driving cars are a new and important industrial development that will require adjustments to policy. They don't require revolutionary new legal principles.


This is crazy. The developers likely have no say in where and when the cars go out on public roads. That's obviously a decision for someone higher up in the company.

The executives should be held accountable, not the developers.


I disagree; you're just passing the buck. There needs to be accountability at all levels. If an engineer writes a bug into code like this (deliberately or not) and that bug results in somebody's death, the engineer should be held accountable just as much as the person who approved its release. The executive could just as easily say "my engineers promised me it was fully tested", etc. Engineers could say "yep it was, but that was an edge case we missed" or something like that. In any case, there needs to be shared accountability. Maybe execs take the brunt, but engineers should not be allowed to write code that kills people (inadvertently or otherwise) and face zero consequences.


What software developer would ever sign on to a project where they could be held criminally liable for a single bug?

Do you want software development to turn into healthcare, where every developer needs millions of dollars of malpractice insurance? Because shit like this will turn it into a healthcare-like system real quick.


Criminal liability is a different situation as there are very few industries with specific criminal liabilities (finance maybe).

But there are many industries where civil liabilities are required. In fact, any independent software consultant is civilly liable for their work, but this is not specific to software.

IEEE has a section in their member toolkit that goes into why professional liability insurance is needed, https://m.ieee.org/membership_services/membership/discounts/...

The costs aren’t that high, or at least they weren’t 15 years ago, when I purchased it for less than $1k/year for $1M in coverage. Most people need this even if they think they are safe. If you’re the one who wrote the deployment script that erased $1M in data, the fact that the script made it through QA won’t entirely mitigate your liability.

Also interesting is that the engineer who wrote the Uber software is already exposed to criminal-negligence liability, like pretty much everyone else, though you would have to prove culpability. I can’t find any examples of software engineers being convicted, so it’s hard to tell who would go to jail: the developer, QA, or an executive.

More info on criminal/civil negligence- https://www.theblanchlawfirm.com/?practice-areas=criminal-ne...


Nobody in their right mind would work with such liability without insurance, which is all well and good for civil liability, but insurance won't help if you're going to jail.


I think we may be arguing different things.

Almost all employees face the possibility of criminal-negligence liability for their work. For programmers, this could mean that if you fuck up the code for a pacemaker and someone dies, you could go to jail. That’s a big risk, and yet I can’t find any programmer who has been found culpable for someone’s death. This is the current law in the US.

If Uber was negligent in its code, then the programmers could go to jail. Uber has programmers, and they work and assume this extremely low risk.

Now maybe you’re arguing that some special law should or should not exist for Uber drivers.


This happens all the time in aerospace. You need to sign off the software personally and you need to be an accredited engineer to be allowed to do that.


If the only options you’re presenting are “move fast and break things”, where those things are human lives, or introducing burdensome bureaucracy, I’ll take the bureaucracy. Time and again society has chosen the latter option, and it will again. Unaccountability is worse than regulation, and history has shown that repeatedly.


This is quite the strawman, is it not? I said nothing about "move fast and break things."


> What software developer would ever sign on to a project where they could be held criminally liable for a single bug? Do you want software development to turn into healthcare, where every developer needs millions of dollars of malpractice insurance? Because shit like this will turn it into a healthcare-like system real quick.

How else am I to interpret that? When a single bug can cause loss of life, and given that this is a thread about Uber, it’s hard to draw other conclusions. By all means, though, offer another perspective on why industries with significant numbers of lives on the line can’t manage regulation. While you’re doing that, I’d point to the aerospace sector, which seems capable of both innovation and regulation.


There's a difference between holding someone criminally responsible for a bug in code that they wrote, and some sort of regulation. They are not the same.

For example: https://en.wikipedia.org/wiki/Boeing_737_rudder_issues


Bad example for two reasons. First:

> Although the NTSB investigated the accident, it was unable to conclusively identify the cause of the crash. The rudder PCU from Flight 585 was severely damaged, which prevented operational testing of the PCU.[3]:47 A review of the flight crew's history determined that Flight 585's captain strictly adhered to operating procedures and had a conservative approach to flying.[3]:47 A first officer who had previously flown with Flight 585's captain reported that the captain had indicated to him while landing in turbulent weather that the captain had no problem with declaring a go-around if the landing appeared unsafe.[3]:48 The first officer was considered to be "very competent" by the captain on previous trips they had flown together.[3]:48 The weather data available to the NTSB indicated that Flight 585 might have encountered a horizontal axis wind vortex that could have caused the aircraft to roll over, but this could not be shown conclusively to have happened or to have caused the rollover.[3]:48–49

> On December 8, 1992, the NTSB published a report which identified what the NTSB believed at the time to be the two most likely causes of the accident. The first possibility was that the airplane's directional control system had malfunctioned and caused the rudder to move in a manner which caused the accident. The second possibility was a weather disturbance that caused a sudden rudder movement or loss of control. The Board determined that it lacked sufficient evidence to conclude either theory as the probable cause of the accident.[2]:ix[3]:49 This was only the fourth time in the NTSB's history that it had closed an investigation and published a final aircraft accident report where the probable cause was undetermined.[4]

Second:

> In 2004, following an independent investigation of the recovered PCU/dual-servo unit, a Los Angeles jury, which was not allowed to hear or consider the NTSB's conclusions about the accident, ruled that the 737's rudder was the cause of the crash, and ordered Parker Hannifin, a rudder component manufacturer, to pay US$44 million to the plaintiff families.[16] Parker Hannifin subsequently appealed the verdict, which resulted in an out-of-court settlement for an undisclosed amount.


You interpret it as written, which is that holding developers routinely criminally liable for bugs is going to have very negative effects. One of them is that the only developers you'll get are precisely those too unwise to realize what an incredibly stupid deal that is, no matter what the pay rate is. I don't think I'd like to see all my critical software written by such "unwise developers".

I have no problem "piercing the veil" for egregious issues. I'd have no problem holding a developer liable for failing to secure a project but continuing on rather than quitting. But "let's just hold all the engineers criminally liable all the time!" is a bad idea, and there's a reason it isn't already done.


It’s not done because software development is an unregulated shitshow full of wildly unethical companies racing to the bottom. It’s not unlike early aerospace, or early medicine, or any frontier that develops rapidly before legal frameworks inevitably close in.


It's also not done because it is mathematically impossible to certify software as 'bug-free' in the general case.

Software isn't like civil engineering where you can mathematically prove that a design is sound.


This is not true at all. First of all, there's no such thing as being able to mathematically prove a design is sound in any engineering discipline, software or non-software. After all, it is infeasible if not impossible to encapsulate all the details of the implementation of _any_ system in mathematics or any other system of reasoning (down to every last atom, if you stretch your imagination).

All we have in engineering (non-software) is something like safety factors and confidence, and this is done with (usually) rigorous mathematical models as well as loads and loads of testing to fill in the gaps of mathematics (think unknown constant/parameters, assumptions, etc).

None of this is impossible to do for software. There are systems that enable everything from easy, entry-level verification (something like TLA+) to much more complicated reasoning (something like Coq). These allow system designers to gain confidence in whether the system will work and to understand under what scenarios it will fail. Contrast this with the existing software landscape, which is mostly, at least from my perspective, "let me write some stuff until things do approximately what I want." Even at the top of the ladder, I feel the tests conducted are "ad hoc" at best, with none of the rigour you associate with traditional engineering fields.
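To make that concrete, here is a minimal sketch of exhaustive state-space checking in plain Python, in the spirit of (though vastly simpler than) a TLA+ model. The crosswalk controller and its transitions are invented for illustration; the point is the technique of checking a safety property in every reachable state.

    # Toy model checker: enumerate every reachable state of a small,
    # invented crosswalk controller and assert a safety invariant in each.

    def next_states(state):
        # car is one of green/yellow/red; walk is walk/dont_walk
        car, walk = state
        steps = []
        if car == "green":
            steps.append(("yellow", walk))        # light begins changing
        elif car == "yellow":
            steps.append(("red", walk))
        else:  # car == "red"
            steps.append(("red", "walk"))         # pedestrians may cross
            steps.append(("green", "dont_walk"))  # or traffic resumes
        if walk == "walk":
            steps.append((car, "dont_walk"))      # walk phase ends
        return steps

    def invariant(state):
        # Safety property: never show "walk" while cars have green or yellow.
        car, walk = state
        return not (walk == "walk" and car in ("green", "yellow"))

    initial = ("green", "dont_walk")
    seen, frontier = {initial}, [initial]
    while frontier:  # exhaustive search of the (finite) state space
        state = frontier.pop()
        assert invariant(state), f"invariant violated in {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

    print(f"checked {len(seen)} reachable states; invariant holds in all")

Real tools add temporal properties, fairness, and symbolic techniques to cope with state spaces far too large to enumerate, but the underlying idea is the same: verify the property in every state the system can reach.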


Healthcare costs are not unreasonable in much of the developed and developing world. Most countries have better outcomes and lower costs than here in the US. As another commenter says, healthcare seems to be doing fine; you seem to be assuming the US is the norm when it isn’t.


> Healthcare costs are not unreasonable in much of the developed and developing world. Most countries have better outcomes and lower costs than here in the US. As another commenter says, healthcare seems to be doing fine; you seem to be assuming the US is the norm when it isn’t.

I'm going to cauterize the off-topic debate about the US healthcare system by pointing out that OP was talking about the expense to doctors of malpractice insurance, not about costs to the patients or medical outcomes.

Malpractice liability varies widely by country, but it's a non-trivial expense for doctors everywhere, and significantly higher in states with strong tort liability for doctors.

It's hard to imagine a world with criminal liability (or tort liability) for software engineers that doesn't ultimately end up with a system of insurance for engineers, roughly analogous to the medical malpractice insurance system for physicians.


I am afraid that you are introducing the off-topic debate. The end goal of healthcare is better outcomes for lower prices. Likewise, the end goal of engineering should be better technology for lower costs.

That healthcare in other countries is able to achieve this in spite of the medical malpractice insurance system points to the fact that such a system is not certain to have the deleterious effects you confidently assume.

Whether it is a burden for engineers is another question. But the article and the discussion aren’t about the inconveniences faced by the engineers who programmed this system.


> system of insurance for engineers

Which, as someone else noted, exists and is probably a good idea if you're an independent consultant or possibly a professional (i.e. licensed) engineer who signs off on drawings or other documents for clients or regulators.


I'm sure plenty of people would, but that isn't the point. If you're writing code that could potentially cost people their lives, you need to be able to be held accountable; otherwise it will lead to negligence. This isn't a new problem... maybe for the software space, but not for industry as a whole.


Humans don't suddenly become perfect actors just because incentives align. The stress of that risk, and the efforts taken to mitigate it, seem like they would actually make the software worse.

It's up to the product (the collective of individuals that deliver the product) to address and mitigate the risk it creates; that's not solely on the shoulders of individual software contributors.

If A writes a generic computer vision algorithm and open-sources it, B integrates it into an "is this a bomb or not" product with a white paper outlining its failure rate in a specific situation, then C sells that product to D, who uses it in an entirely different situation, and E gets blown up... who gets sued? It should definitely be somebody; there should certainly be liability, and an incentive to avoid such liability, but it probably lies somewhere in C-D space, not A-B space.


I partially agree. “There is no reason the person who programs or the person who makes money from a self-driving car...”

The person who profits the most should be held the most responsible. But the separation of roles between the executives and the developers is likely to mean that no one gets punished at all.


Why should the person who profits most be held the most responsible? Suppose the person in charge of safety clearance makes less money than the original developer. How does that shift responsibility to the developer, as opposed to the situation where the developer makes less money?


They do have say in who they work for and what they work on. It's not as if a developer capable of doing self-driving work isn't in high demand. Maybe they should stop working for unethical companies doing unethical things. It's not as if the executives could do this work themselves.


> You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

Look, I'm all for developers (and software companies in general) considering the ethical implications of the work they do, and the moral obligations that they take on as a result of it. However:

> We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

This is a bad mentality to take with postmortems for software failures in general, at least from the outset. You need to look at the underlying factors that contributed to the issue, not simply look for a person to assign blame to. It's possible that negligence is the underlying cause, but not necessarily - and even if negligence is a cause, what were the other cultural factors that led to the negligence happening without being caught somewhere else in the pipeline? It's tempting to assign blame, but if you do that, you'll miss out on the systemic improvements that would be necessary to prevent similar incidents in the future.

But moreover, this is a bad outlook to take here, because this wouldn't be criminal behavior if committed by a human. As best we can tell from the details available so far, it was an accident, and it's very rare for criminal charges to even be considered in accidents like these, unless it's a hit-and-run.


> this wouldn't be criminal behavior if committed by a human

I can assure you that a human who is driving carelessly would be held criminally liable. Why do you assume an accident that was severe enough to have resulted in a person’s death—the car didn’t just scrape them because they ran across the street—is not due to reckless programming?


There are soooo many instances of negligent drivers killing cyclists with basically no follow-up from the police. Police all over the US seem to consider cyclists as second-class road users, and trust the driver when they say a cyclist "came out of nowhere". Since these sorts of collisions are more often fatal for the cyclist than the driver, there often isn't anyone to tell the other side of the story. There are rarely criminal charges, and even more rarely convictions (juries are mostly drivers, not cyclists).


> I can assure you that a human who is driving carelessly would be held criminally liable.

Do you see evidence that the car was "driving carelessly"? That's an honest question - from the reporting so far, it doesn't seem clear what the underlying cause was.

Secondly, this is demonstrably false: most pedestrian fatalities caused by vehicles do not result in criminal charges. If you don't believe me, look up the stats. Or talk to the countless bikers' advocacy groups that have been lodging this exact complaint for decades: drivers are not generally held criminally responsible, unless there are aggravating circumstances (the driver was drunk, the accident was a hit-and-run, etc.).

> Why do you assume an accident that was severe enough to have resulted in a person’s death—the car didn’t just scrape them because they ran across the street—is not due to reckless programming?

When a pedestrian dies, just because they died, that doesn't mean the driver is automatically responsible. It could have been the pedestrian's fault, or it could have been the driver's fault. Or it could be both. Or it could even be neither (a true accident, with no assignment of blame).

The same thing holds here. You can't assume that this is the result of "reckless programming", and to be entirely blunt, by jumping to that conclusion on the basis of literally no evidence whatsoever (and misinterpreting existing case law on vehicular accidents in the process), you're actually undermining the success of any future efforts to prevent these sorts of accidents in the future, whether or not it ultimately turns out to be the fault of someone at Uber.


You have good points, thanks for discussing this. I think for me the fundamental problem is that with a human we can characterize reckless driving as driving that a normal, competent human would not do. But there is no “normal, competent” self-driving car, so by what standard do we determine the program’s behavior to be reckless as opposed to acceptable?

I accept your point that this accident might not have led to criminal charges if a human had been responsible. But I don’t waver on my argument that if a human driver would have been held criminally responsible for this accident, then we should hold the executives (or in extreme cases the programmers) of Uber responsible in exactly the same way, whether that be criminal or not.

Finally, with humans and pedestrian fatalities many cases involve drunk driving or sleepy driving. Self-driving cars can’t get drunk or sleepy; they can just have bad programming or bad hardware, both installed by their manufacturer.


If you hold developers responsible, you can kiss self-driving cars goodbye.

What should be passed (though I can't see how) is a cap on allowed deaths, at least in the early years: something like 5-10% of the current rate, reducing to 1% after 20 years.
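For a rough sense of what such a schedule would mean in numbers, here is a small sketch in Python. The baseline is an assumed figure of roughly 1.2 deaths per 100 million vehicle miles (approximately the oft-cited US rate); the schedule simply applies the percentages proposed above.

    # Sketch of the proposed threshold schedule. The baseline rate is an
    # assumption, roughly the cited US figure.
    baseline = 1.2  # deaths per 100 million vehicle miles (assumed)

    for label, pct in [("early years, upper bound", 10),
                       ("early years, lower bound", 5),
                       ("after 20 years", 1)]:
        allowed = baseline * pct / 100
        print(f"{label:24s}: {pct:2d}% of baseline = "
              f"{allowed:.3f} deaths per 100M miles")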

People will die from self-driving cars, and undoubtedly there will eventually be a case that is 100% the self-driving car's fault. The benefit of self-driving cars comes from the mistake being permanently fixed, while with human drivers it can be committed over and over again.

There needs to be some kind of protection for the companies (and obviously the developers; I've never heard anyone argue they should be held responsible before) from lawsuits. Otherwise all it'll take is a small handful before companies just let it die.


> If you hold developers responsible, you can kiss self-driving cars goodbye

Civil engineering and medical device manufacturing seem to be doing fine, despite having similar principles of engineers' liability.


> Civil engineering and medical device manufacturing seem to be doing fine, despite having similar principles of engineers' liability.

The idea that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice is several orders of magnitude beyond the level of liability that civil engineers and medical device manufacturers have.


> that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice

Nobody said that. The original comment said developers should be held "accountable for a crime that, if committed directly by a person, would almost certainly result in jail time" [1].

The standards from medical devices and/or civil engineering, with the associated licensing requirements and verification processes, make sense. Even in the case of a careless mistake or strategic oversight, individuals who could have known but nevertheless signed off should be identified, if not explicitly punished.

[1] https://news.ycombinator.com/user?id=jonathanyc


> > that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice

> Nobody said that.

Well, they quite literally did, because the original comment in this thread was:

> We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

I guess you can quibble about the difference between "accountable" and "liable", but that's not a discussion that's particularly interesting to have here, especially given OP's other comments in this thread which make it quite clear that this is what they had in mind.


The quibble in this case would be the meaning of "developers." In the case of a medical device, the developer is considered to be the Manufacturer, not the specific software developers on the team. Considering how often teams change, etc., using the latter definition would be meaningless.


If it was an accident and not the result of negligence or malice, what is the crime for which the developer would be prosecuted?

If the developer was negligent or malicious in their duties, why not prosecute them?


An act doesn’t become okay because two people (the executive and developer) and a robot are now responsible instead of one. What is your justification for the sort of utilitarian calculation you’ve made here? Why do you assume self-driving cars will be safer without any evidence?

If we are going to be arguing from a utilitarian standpoint, suppose we hold the executives of self-driving car companies as responsible as if they were themselves drivers. Then if self-driving cars truly are safer as you optimistically claim, both fewer people will die from accidents involving them and fewer people will go to jail for those same accidents. Seems like a win to me.


[flagged]


Why is it at all quite obvious? How is arguing for being careful “such a stupid argument it’s really just not worth anyone’s time to entertain”?

Someone in this discussion has an insane amount of blind faith in technology which here literally killed a pedestrian, and it’s not the people who are arguing for just consequences.


Are you arguing that a machine does not have better reaction times than a human being? Are you arguing that a machine can fall asleep, drink and drive, panic in a high stress situation?

Aren't you the same person who called for holding the developer liable for writing software with a bug? Are you accusing the developer of promising something that is impossible (not hitting a pedestrian in a crosswalk?) or simply implementing it wrong?

It's worth pointing out that we have no idea yet who is at fault in this accident. It could easily be someone who simply walked out in front of traffic when they weren't paying attention.


"Are you saying X" is a pretty aggressive way to frame your argument.

The above poster seems pretty clear that it is NOT obvious that cars will necessarily drive safer than humans on average, in the same way it is NOT obvious that we will ever have General Artificial Intelligence.

These are very complicated problems, and the machines are currently (significantly) worse than human drivers, so I think it's fair to question the argument that "everything will work out eventually"


I think the idea that self-driving cars "may not ever be" safer than human drivers is ludicrous, even fatuous. We set an abysmally low bar for safety.

I think that is why the very assertive "Are you saying..." is appropriate.


Maybe you're right, progress is inevitable.

But humans always overestimate the rate of progress, and think we will be living in some amazing futurescape in the next 10 years.


The answer to your first “question”, of course, is that it depends on the machine: how it’s built and programmed, and the context of operation. Machines can have much faster reflexes, or they can freeze.


Machines will have slower reaction times.

Theoretically it could be otherwise, perhaps, though the human brain has an extremely parallel pattern-matching engine honed by about half a billion years of evolution.

Realistically, the self-driving system will be made of layered distinct components that all add latency. This is how we build both hardware and software. An image is sensed, it gets compressed, it gets passed along the CAN bus, it gets queued up, it gets decompressed, it gets queued up again, object detection runs, the result of that gets queued up for the next stage... and before long you're lucky if you haven't burned a whole second of time.
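Here is a toy latency budget for the kind of layered pipeline described above. Every stage and timing is invented for illustration (real systems vary widely and are often far better optimized), but it shows how per-stage delays accumulate.

    # Invented per-stage latencies for a layered perception pipeline.
    # The point is the accumulation, not the specific numbers.
    pipeline_ms = {
        "sensor exposure + readout": 33,  # roughly one frame at 30 fps
        "compression": 5,
        "bus transfer": 5,
        "input queue": 10,
        "decompression": 5,
        "object detection": 50,
        "tracking / prediction": 20,
        "planning": 30,
        "actuation command": 10,
    }

    for stage, ms in pipeline_ms.items():
        print(f"{stage:28s} {ms:4d} ms")
    print(f"{'total':28s} {sum(pipeline_ms.values()):4d} ms")  # stages add up

Even these made-up numbers sum to well over 150 ms before the vehicle acts, and each extra queue or hand-off pushes the total higher.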

Machines can drive aggressively.

There was a university that had self-driving cars do parallel parking... by drifting. Driving along, the car would find a parking spot on the other side of the road. It would steer hard to that side, break traction, swing the rear of the vehicle around sideways through a 180-degree turn, and finally skid sideways into the spot. The car did this perfectly.

That kind of ability is something that I personally don't have. I would consider buying a self-driving car that could do this. If I'm paying, and that kind of driving is my preference, I expect to get it.


I really don't want you to get your wish. We have no need to invest in flashy self-driving stunt work; building a car that can get you from A to B safely and in a reasonable time frame is all that we should be aiming for.

That sort of drifting parallel park might work most or nearly all of the time, but if the road conditions are poor and the car loses grip, it will be a lot more risky.


The camera feed going straight to the neural network will not have a lot of latency. The neural net will not take very long to process the image and make a decision. Humans need at best half a second, and at worst several seconds, to recognize, process, and act. These systems are designed to respond fast; they do not have a second of latency.


> What should be passed (though I can't see how) is a cap on allowed deaths, at least in the early years: something like 5-10% of the current rate, reducing to 1% after 20 years.

With so few self-driving cars, that number should be zero. If you can't assure safety with a few cars with a human as backup, you should not be on the streets. And this is not the first dangerous accident involving an Uber self-driving car where Uber was at fault.


I'm guessing people are downvoting because of the implication that the software team should be held responsible?

Something does feel wrong about punishing them when the decision to put the car on the road in the first place was almost certainly not their own.

Though I agree Uber should be held accountable for it, and it shouldn't be a token fine, since the whole point of punishing an accident like this is to discourage such accidents from occurring in the first place.

This sort of accident orchestrated by a group of people probably won't be gracefully handled by our legal system.


The developers have no control over the sensors, tires, weight of the car, testing budget, human backup operator, or a million other things that went into this happening. Hell, the developers probably told whoever they could not to release this. Management at Uber and whoever approved this thing should be held accountable.


> The developers have no control over the sensors, tires, weight of the car, testing budget, human backup operator, or a million other things that went into this happening

If one of those things caused the accident, the developer isn't to blame. Civil engineering has experience tracing liability from mistakes (and incentivizing prevention).



